Power BI Embedded … confused?

So I wanted to write this post because I’ve gotten a lot of questions about this over the past year. Power BI Embedded is a pretty awesome tool. The idea is this: “I want to get cool visualizations into my application, how do I do that?” The answer is Power BI Embedded, and here’s a video for those unfamiliar with the product.

But for me, the question that usually comes next is the one I want to cover here: “How do I get this?” There’s a lot of confusion when it comes to Power BI, and that’s because it can be purchased in a couple of different ways.

Explaining the types of Power BI:

There are essentially three flavors of Power BI:

Power BI Pro: These are individual licenses for people who will be working on the backend to build visualizations, or who may be provisioning capacity in Azure.

Power BI Premium: This service is designed around providing dedicated capacity for running data refreshes and visualizations for your Power BI implementation. It allows for managing workspaces in the Power BI portal, and it also supports the embedded functionality. The primary difference is that this is an Office 365 SKU, so a partner has to purchase licenses through their reseller to add capacity. Each license (EM1, EM2, EM3, P1, P2, P3) provides different capacity, found here.

One item worth mentioning on the above SKUs is that you will see the cores separated into “back-end” and “front-end”: the back-end cores are responsible for data refreshes, and the front-end cores are for visualization. This is important because if you implement an EM1 SKU, you are sharing a single core between the two, which can cause issues with timeouts.

Power BI Embedded: This service is targeted more at ISVs and leverages Azure to provide the capacity. The SKUs are basically identical, but the primary difference is that you add capacity through the Azure portal and it is billed on a consumption model. Ultimately this can be cheaper, and capacity is easier to allocate should you need to add more.

At its core, how does this work?

Power BI functions on the idea of workspaces, which are created in the Power BI portal; PBIX files with data sets and visualizations are then uploaded to them. Once a workspace is available, capacity has to be added for the processing. This capacity can come from Office 365 SKUs or Azure, depending on how you configure it.

So let me answer some questions about what you might want to do.

I wanted to render visualizations in my application, how do I do that?

For this use case you really want Power BI Embedded, with a few Power BI Pro licenses. Purchasing Power BI Premium requires working with a reseller to buy licenses and then working strictly within an Office portal to create the capacity. You will also be paying for a lot of features you really don’t care about.

For Independent Software Vendors (ISVs) it makes a lot more sense to just buy Power BI Embedded; it’s transacted in the Azure portal, which makes it very easy to create capacity and scale up as needed.

You will need Power BI Pro licenses as well, for the following use cases, but these are really cheap (a few dollars at the time of this post):

  • Any developer who will be building visualizations.
  • Any operations person (or service account) that will be provisioning or managing capacity.
  • Any service accounts that will be handling communication between the application UI and Power BI. This is required because without a license you will be throttled on the number of request tokens you can generate.
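To make the embed-token piece concrete, here is a rough sketch of the request an application service account would send to the Power BI REST API’s GenerateToken endpoint. The endpoint shape follows the documented REST API; the workspace/report IDs and the AAD access token below are placeholders, not real values.

```python
# Sketch of building a Power BI embed-token request (GenerateToken endpoint).
# The IDs and token are placeholders; a licensed service account's AAD
# access token is assumed to already be in hand.

def build_generate_token_request(workspace_id, report_id, aad_token):
    """Build the URL, headers, and body for the GenerateToken call."""
    url = (
        "https://api.powerbi.com/v1.0/myorg"
        f"/groups/{workspace_id}/reports/{report_id}/GenerateToken"
    )
    headers = {
        "Authorization": f"Bearer {aad_token}",  # service account's AAD token
        "Content-Type": "application/json",
    }
    body = {"accessLevel": "View"}  # read-only embedding for end users
    return url, headers, body

url, headers, body = build_generate_token_request(
    "00000000-0000-0000-0000-000000000000",  # placeholder workspace (group) id
    "11111111-1111-1111-1111-111111111111",  # placeholder report id
    "<aad-access-token>",                    # placeholder token
)
```

The returned embed token is what the front end hands to the Power BI JavaScript client to render the report, which is exactly where the Pro-licensed service account earns its keep.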

What is the difference between Gov and Commercial in Power BI?

So for an implementation of Power BI, you will require Power BI Pro licenses for the following:

  • Developers working on Power BI visualizations
  • Administrators who manage the Power BI Workspaces
  • Service Accounts from Apps that leverage Workspaces

For Government specifically, you cannot access the Power BI Embedded functionality in the portal without a Power BI Pro license.

One thing worth mentioning: if you are purchasing Power BI Pro licenses with the intention of using Government, you will need Power BI Pro GCC High, as these are the only licenses that can attach to your Azure AD accounts in the Government cloud.

How do I purchase Power BI licenses?

Here is a link that talks you through purchasing Power BI Premium. For Power BI Embedded, here’s a link that explains the process in the Azure portal.

How do I know how much capacity I need?

There is a great link here that talks about the different SKUs for Power BI Embedded; specifically, it helps you choose the appropriate memory and vCore configuration to provision for your workload.

So the question becomes, “How do I know how much I need?”

The capacity required really depends on four elements:

  1. The amount of data being sent over and consumed.
  2. The complexity of any transformations done within Power BI.
  3. The complexity of the visualization.
  4. The demand on the application.

Here’s a whitepaper that was put out for Power BI Capacity planning for Embedded.

Hope that helps?

Hope that helps you with understanding the licensing of Power BI. There is a lot of confusion here, and I hope this clears it up.

Weekly Links – 3/29

So life has been interesting during this self-quarantine. My family is luckier than most in that we have been able to rely on my job, and we were already pretty used to working from home (5 years of it). But honestly, I know there are others struggling, and what’s also terrible in my mind is the number of people who are single and live alone who are in isolation right now. Please take some time to do some things to help yourselves. Mental health is important.

To that end, here are some quick tips we found this first week:

  • Now is the time to adopt a new routine, build habits that you’ve been too busy to do. Force yourself to structure parts of your life that were previously unstructured.
  • Take time to walk away and be alone, focus on yourself and your health.
  • Find ways to engage with friends. In my case we took our monthly DnD game and made it weekly on roll20.net, and it’s been great. The opportunity to do something different has been huge.
  • Learn something new, give yourself an objective and work towards it. Before this I had planned on learning to use a smoker, and had been putting it off forever, now I’m finally doing it.

Down to business…

Fun Stuff:

So a lot of movies have been coming straight to digital right now, and the one I’m most excited about is Bloodshot; it’s like Memento meets Rambo, and I’m excited. I’ve been a huge fan of the comic for a while now, pretty much since the Valiant relaunch. You should check it out on your streaming service of choice. Here’s a trailer.

Governance in Azure DevOps

So I’ve been a big fan of Azure DevOps since its original version. The idea of creating something for operations to manage in Team Foundation Server always made me shake my head. Developer tooling, methodology, and needs are always changing, so a SaaS offering really is the best option. I always laugh when people question this idea, because, well, GitHub.

Now the one thing I’ve seen a lot of questions around, though, is governance. After all, as you adopt tools like this, they become mission critical to all of your software development efforts. So how do you configure this thing correctly?

There are several things you can do to make sure you’ve configured governance for Azure DevOps. I’m going to cover some of the high points here to get you started.

Leverage Azure AD

One of the biggest recommendations I have is to leverage Azure AD as your backend identity store if at all possible. Azure AD provides a robust identity store that allows for managing users in groups. It is my belief that no individual users should be added to Azure DevOps; instead, Azure AD groups should be leveraged. This provides several options which can help with management:

  • Enable authentication with the same accounts your users leverage for Microsoft 365 and the Azure Portal.
  • If you don’t use those services, you can leverage federation with other identity stores to provide integration with existing IdPs.
  • Simplifies adding users and exit procedures.
  • Keeps access limited to those that need the tool; this keeps people like internal IT operations from having admin permissions in ADO, and instead limits them to Azure AD.
  • Lets you integrate more advanced features like MFA.

You can configure access with Azure AD by following this. And here’s a great article that walks through how to create organizations and connect them with Azure AD.

Use Organizations

Organizations are key to Azure DevOps governance, as they provide a means of creating access controls around projects, work item tracking, source control, pipelines, etc.

Now when it comes to structuring your organization, there are several things you can do to ensure success. Here are some general recommendations from the ADO documentation.

Really you want to make sure you structure your projects and repos according to the work you’re doing, and potentially leverage the idea of teams where needed.

I find there are a couple of things to keep in mind when you structure your organization, and I’m going to give those instead of specific guidance and a “this is the only structure that works”:

  • Projects should be self-contained – a project should include all components, from work items to source code, pipelines, and testing, for a single effort.
  • Projects should represent a key security boundary. If you take the above and make projects self-contained, it becomes really easy to treat each one as a security boundary.
  • Break out repos within a project. I am not a fan of making every element of your project live in one repo. This causes a lot more problems when you try to build your pipelines and can make your builds take forever.
  • Group work items and boards in a way that makes sense for your team. I’m a big ALM guy, but I have to be honest: too many times people say “but agile says…”, when the truth is the agile manifesto calls out the need to focus on delivering software, and doing so in a way that makes sense.

Ultimately there are lots of ways to do the above, but these are my recommendations for where to start.

Weekly Links – 3/23

So we’ve all been home for a week now, and honestly most of the world is still grappling with working from home. Having made that transition over the past 2 years, I know how big a transition it is. I honestly hope everyone is taking time to step back and be healthy, both physically and mentally.

Down to business…

Fun Stuff:

So before the craziness of COVID-19, we got the opportunity to take the family to see Onward, the new Pixar movie. And honestly it was amazing, a great story with great performances. The story is timeless: two brothers figuring out who they are by figuring out where they came from. The story of the relationship between a son and his father hit me right in the feels. A truly amazing movie, and honestly, if you have dry eyes at the end, I question whether you are a human or a Cylon.

Things I listen to…Podcasts / Audio Books

Hello all, a shorter post this week. This is a question I’ve been asked a few times, so I thought I would put together a post on it: I listen to a lot of podcasts and audio books. Personally I like this because it gives me an opportunity to take time I might not otherwise use and make it productive.

For Tech Podcasts, I listen to:

DotNetRocks – This really is a great and long-running technical podcast that I have been listening to for easily 7 years. They cover a wide array of topics and really have a great collection of guests. I was lucky enough to be a guest on this podcast once, and I have to tell you it made me even more of a fan.

Azure Monk – Cloud Solution Architects – A shameless plug for a friend’s podcast. Anand is a great architect and very gifted technology specialist, and his podcast involves interviewing other architects. The part I like about this is hearing the disparate backgrounds that lead everyone to this field.

Hanselminutes – To me this is a must-listen for anyone in technology. Scott Hanselman is “everyone’s favorite developer,” and to be honest what I really enjoy about this podcast is the breadth of topics. He covers everything from coding, game development, and mathematics to soft skills and just about anything else that comes to mind.

For more fun stuff I enjoy these podcasts when I need a break:

Critical Role – OK, I’ve made no secret of the fact that I am a huge nerd and a big fan of gaming, whether it be video games or, most recently, table-top games. Recently I’ve dived into Dungeons and Dragons. To that end I’ve really enjoyed Critical Role as a podcast, as it really does a good job of capturing everything I enjoy about the storytelling medium.

Angel of Vine – A fictional podcast that documents an investigation during Hollywood’s golden era, with some major-name actors behind it. It leads to some great mysteries and storytelling; definitely recommend it.

Welcome to Night Vale – A bizarre podcast that can be a lot of fun if you are looking for something weird. Welcome to Night Vale very much embraces Cthulhu and Lovecraftian horror. Like I said, it’s a podcast for someone who just wants something offbeat.

Decoder Ring Theatre – Going to get a little personal here: when I was growing up, I had a great uncle who introduced me to The Shadow, the original radio program, and it is part of what turned me on to comics in general. So when I found this podcast series modeled on old radio dramas, I absolutely jumped at it. The adventures of the Red Panda are absolutely something my amazing wife and I enjoy.

And for additional listening, I highly recommend the following audio books:

The Infinite Game – Simon Sinek – This is my most recent read, and it really is a fascinating book. The idea of the infinite game has so many implications for our lives, but specifically the way of attaching it to your career and your goals is absolutely fascinating. This is a very dense but powerful book, with a lot to digest.

Just Mercy – Bryan Stevenson – This is a really powerful, and at the same time brutal, read. The story is a powerful one about standing up for what’s right and not backing down, and the underlying story of empathy is a truly powerful one. I will say that if you are looking for “light reading,” this is not it. But it is a powerful story about standing up to protect people from the hate of the world, and from what I would consider to be “true evil.”

Retire Inspired – Chris Hogan – This really is a great book for everyone when it comes to managing your finances. The advice in here is powerful for retirement, and I recommend it to anyone and everyone who is working. It’s never too early to think about retirement.

Grit – Angela Duckworth – Really an amazing book, and it definitely changes your mindset with regard to the difference between talent and effort.

Mindset – Carol Dweck – Another amazing book that when combined with the above will really help you to understand the difference between a Growth Mindset and a Fixed Mindset and the overall impact on your life of each.

Weekly Links – 3/16

So this week had a few things going on. Mainly, as COVID-19 grips the United States, things have been more than a little crazy.

So down to business…

Fun Stuff

OK, so I have to admit there has been a lot of stuff going on and a lot to look at. Right now I’ve been really enjoying the Jonathan Hickman run on X-Men. Now, full transparency, I have never liked X-Men: I don’t like the movies, I don’t like the shows, I don’t like the comics. For me it always felt like there is only ever one story, “I have powers and people hate me.” But Hickman did the impossible and gave it an interesting spin.

What are some things I can do to improve application availability?

So I’ve done a few posts on how to increase the availability and resiliency of your applications, and I’ve come to a conclusion over the past few months: this is a really big topic that seems to be top of mind for a lot of people. I’ve done previous posts like “Keeping the lights on!” and “RT? – Making Sense of Availability?” which talk about what it means to go down this path and how to architect your solutions for availability.

But another question that comes with that is: what types of things should I do to implement stronger availability and resiliency in my applications? How do I upgrade a legacy application for greater resiliency? What can I do to keep this in mind as a developer to make this easier?

So I wanted to compile a list of things to look for in your application that, if changed, would increase your availability and/or resiliency, and things to avoid if you want to improve both of these factors.

So let’s start with the two terms I keep using, because improving both starts with everyone talking about the same thing:

  • Availability – the ability of your application to continue operating its critical functionality, even during a major outage or downtime situation; the ability to continue to offer service with minimal user impact.
  • Resiliency – the ability of your application to continue processing current work even in the event of a major or transient fault; finishing work that is currently in progress.

Looking at this further, the question becomes: what kinds of things should I avoid, or remove from my applications, to improve my position moving forward?

Item #1 – Stateful Services

Generally speaking this is a key element in removing issues with availability and resiliency, and it can be a hotly debated issue, but here’s where I come down on this. If a service has state (either in memory or otherwise), it means that failing over to somewhere else becomes significantly more difficult: I now must replicate that state, and if it’s held in memory, that becomes a LOT harder. If it’s in a separate store, like SQL or Redis, it becomes easier, but that adds complexity which can make that form of availability harder. This is especially true as you add “9”s to your SLA. So generally speaking, if you can avoid having application components that rely on state, it’s for the best.

Additionally, stateful services cause other issues in the cloud, including limiting the ability to scale out as demand increases. The perfect example of this is “sticky sessions,” which means that once you are routed to a server, we keep sending you to the same server. This is the antithesis of scaling out and should be avoided at all costs.

If you are dealing with a legacy application and removing state is not feasible, then at minimum you should make sure that state is managed outside of memory. For example, if you can’t remove session state, move it to SQL and replicate it.
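To illustrate the idea, here is a minimal sketch of externalizing session state, with a dict-backed store standing in for Redis or SQL (the class and method names are made up for illustration). The point is that because no instance holds session data in its own process memory, any instance can serve any request and failover loses nothing.

```python
# Minimal sketch of externalizing session state: the web tier holds no
# session in process memory, so any instance can serve any request.
# The dict-backed store is a stand-in for Redis or SQL in a real system.

class ExternalSessionStore:
    def __init__(self, backend=None):
        self._backend = backend if backend is not None else {}  # stand-in for Redis/SQL

    def save(self, session_id, data):
        self._backend[session_id] = dict(data)  # persist outside the web process

    def load(self, session_id):
        return self._backend.get(session_id, {})

# Two "servers" sharing one store: a session written by one instance
# is immediately visible to the other, so a failover loses nothing.
shared = {}
server_a = ExternalSessionStore(shared)
server_b = ExternalSessionStore(shared)
server_a.save("user-42", {"cart": ["widget"]})
```

In a real app, ASP.NET’s SQL session provider or a Redis cache plays the role of `shared` here; the code shape stays the same.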

Item #2 – Tight Couplings

This one touches both of the key elements I outlined above. When you have tight coupling between application components, you create something that can fail as a unit, and it prevents you from building a solution that scales well.

Let’s take a common example: say you have an API tier in your application, and that API is built into the same web project as your UI front end. That API then talks directly to the database.

This is a pretty common legacy pattern. The problem it creates is that the load demands of your web application and the backend API are very tightly coupled, so a failure in one means a failure in the other.

Now let’s take this a step further and say that you expose your API to the outside world (following security practices) to let your application be extensible. Sounds all good, right?

Except when you look deeper: by having all your application elements talking directly to each other, you have now created a scenario where cascading failures can completely destroy your application.

For example, one of your customers decides to leverage your API pretty hard, pulling a full dump of their data every 30 seconds, or you sign up a lot of customers who all decide to hit your API. It leads to the following effects:

  1. The increased demand on the API causes memory and CPU consumption on your web tier to go up.
  2. This causes performance issues in your application’s ability to load pages.
  3. Transactions against the API drive increased demand on SQL, and that increased demand causes your application to experience resource deadlocks.
  4. Those resource deadlocks cause further issues with user experience as the application fails.

Now you are probably thinking, “Yes, Kevin, but I can just enable autoscaling in the cloud and it solves all those issues.” To which my response is: with an uncontrolled inflation of your bill to go with it. So clearly your CFO is OK with uncontrolled costs to offset a bad practice?

One way to resolve this awfulness is to split the API into a separate compute tier. By doing so, we can manage that compute separately without having to wildly scale everything to offset the issue, and I then have separate options for allowing my application to scale.

Additionally, I can implement queues as a load-leveling practice, which allows my application to scale only in scenarios where queue depth expands beyond a reasonable response time. I can also throttle requests coming into the API, or prioritize messages coming from the application, and I can replicate the queue messages to provide greater resiliency.
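The load-leveling idea above can be sketched in a few lines. This is an in-process toy, not a real broker like Azure Service Bus: requests land in a queue, a worker drains it at its own pace, and a depth limit gives us a natural throttling point. The function names and the `MAX_DEPTH` value are invented for illustration.

```python
# Sketch of queue-based load leveling: requests land in a queue and a
# worker drains it at a controlled rate, so a burst of API traffic
# raises queue depth instead of crushing the database tier.
from queue import Queue

requests_queue = Queue()
MAX_DEPTH = 100  # beyond this depth we would throttle or scale out

def enqueue_request(req):
    """Front door: accept work into the buffer, or tell callers to back off."""
    if requests_queue.qsize() >= MAX_DEPTH:
        return "throttled"
    requests_queue.put(req)
    return "accepted"

def drain(batch_size):
    """Worker side: process at its own pace, independent of arrival rate."""
    processed = []
    for _ in range(min(batch_size, requests_queue.qsize())):
        processed.append(requests_queue.get())
    return processed
```

The design point is the decoupling: the API front door and the database worker now scale and fail independently, which is exactly what the tightly coupled example above could not do.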

Item #3 – Enabling Scale out

Now I know, I just made it sound like scaling out is awful, but the key word is “controlled.” What I mean is that by making your services stateless and implementing practices to decouple them, you create scenarios where you can run one or more copies of a service, which enables all kinds of benefits from a resiliency and availability perspective. It changes your services from pets to cattle: you no longer care if one is brought down, because another takes its place. It’s sort of like a hydra, which is a good way of thinking about it.

Item #4 – Move settings out of each piece of an application

The more tightly your settings and application code are connected, the harder it is to make changes on the fly. If your code is tightly coupled and requires a deployment to make a configuration change, then something as simple as changing an endpoint becomes increasingly difficult. So the best thing you can do is start moving those configuration settings out of your application. No matter how you look at it, this is an important thing to do, for reasons relating to:

  • Security
  • Maintainability
  • Automation
  • Change Management
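As a small sketch of what “settings out of code” looks like in practice, here is configuration pulled from the environment rather than compiled in. The variable names (`APP_API_ENDPOINT`, etc.) and defaults are made up for illustration; the same shape works whether the values come from environment variables, Azure App Configuration, or Key Vault.

```python
# Sketch of externalized configuration: endpoints and flags come from
# the environment, so changing them requires no redeployment.
# All variable names and defaults here are illustrative only.
import os

def load_settings(env=os.environ):
    return {
        "api_endpoint": env.get("APP_API_ENDPOINT", "https://localhost/api"),
        "retry_count": int(env.get("APP_RETRY_COUNT", "3")),
        "feature_x_enabled": env.get("APP_FEATURE_X", "false").lower() == "true",
    }

# Passing a plain dict lets the same code run locally, in CI, or in
# production with nothing changed but the environment it reads.
settings = load_settings({"APP_API_ENDPOINT": "https://prod.example/api"})
```

Notice how security, maintainability, automation, and change management all fall out of this one move: the deployment pipeline sets the environment, and the code never needs to know which environment it is in.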

Item #5 – Build in automated deployment pipeline

The key to high availability often comes down to automation, especially when you push toward more 9’s. The simple fact is that seconds count.

But more than that, automated deployments help to manage configuration drift. The more configuration drift you have, the harder it is to maintain a secondary region, because you have to make sure one region doesn’t have things the other does not. This is eliminated by forcing everything to go through the automated deployment pipeline: if every change must be scripted and automated, it is almost impossible for configuration drift to creep into your environments.

Item #6 – Monitoring, Monitoring, Monitoring

Another element of high availability and resiliency is monitoring. If you had asked me years ago for the #1 question most developers treat as an afterthought, it was “How do I secure this?” And while a lot of developers still somehow treat that as an afterthought, the bigger one now is “How do I monitor this and know it is working?” Given the rise of microservices and serverless computing, we really need to be able to monitor every piece of code we deploy. So anything new you build needs hooks to answer that question.

This could be as simple as building custom telemetry logging into Application Insights, or logging incoming and outgoing requests, exceptions, etc. But we can’t know something is running without implementing these metrics.
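As a rough sketch of what those hooks look like, here is a small decorator that logs the operation name, outcome, and duration of every call. It uses the standard library logger; in a real system the same pattern would emit custom telemetry to Application Insights or whatever your monitoring tool ingests. The operation and function names are invented for illustration.

```python
# Sketch of baking telemetry into every call path: log the operation,
# outcome, and duration so a monitoring tool can tell you the service
# is healthy without you guessing.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("telemetry")

def instrumented(operation):
    """Wrap a function so every call emits success/failure telemetry."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                logger.info("%s succeeded in %.1f ms", operation,
                            (time.perf_counter() - start) * 1000)
                return result
            except Exception:
                logger.exception("%s failed", operation)  # log and rethrow
                raise
        return inner
    return wrap

@instrumented("fetch_orders")
def fetch_orders():
    return ["order-1", "order-2"]  # stand-in for a real downstream call
```

The nice part of the decorator approach is that instrumentation stops being optional: every new endpoint you wrap answers “is this working?” by default.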

Item #7 – Control Configuration

This one builds upon the comments above. The biggest mistake I see people make with these kinds of implementations is that they don’t manage how configuration changes are made to an environment. Ultimately this leads back to the “pets” vs. “cattle” mentality. I had a boss once who had a sign above his office that said “Servers are cattle, not pets… sometimes you have to make hamburgers.”

And as funny as that statement is, there is an element of truth to it. If you allow configuration to be changed and fixes to be applied directly to an environment, you create a situation where it is impossible to rely on automation with any degree of trust. And it makes monitoring, and every other element of a truly highly available or resilient architecture, completely irrelevant.

So the best thing you can do is leverage the automated pipeline: if any change needs to be made, it must be pushed through the pipeline. Ideally, remove people’s access to production for anything beyond reading metrics and logs.

Item #8 – Remove “uniqueness” of environment

And as above, we need to make sure everything about our environments is repeatable. In theory I should be able to blow an entire environment away and, with the click of a button, deploy a new one. This is only possible by scripting everything. I’m a huge fan of Terraform for solving this problem, but Bash scripts, PowerShell, CLI… pick your poison.

The more you can remove anything unique about an environment, the easier it is to replicate it and create, at minimum, an active/passive setup.

Item #9 – Start implementing availability patterns

If you are starting down this road of implementing more practices to enhance the resiliency of your applications, there are several patterns you should consider as you build out new services. They include:

  • Health Endpoint Monitoring – implementing functional checks in an application that external tools can hit to verify health.
  • Queue-Based Load Leveling – leveraging queues that act as a buffer, putting a layer of abstraction between your application and incoming requests so they can be handled in a more resilient manner.
  • Throttling – managing resource consumption so that you can meet system demand while controlling consumption.
  • Circuit Breaker – extremely valuable in my experience: your service should be smart enough to use an incremental retry and back off if a downstream service is impacted.
  • Bulkhead – leveraging separation and a focus on fault tolerance to ensure that just because one service is down, the whole application is not.
  • Compensating Transaction – if you are using a bulkhead, or any kind of fault tolerance, or have separation of concerns, it’s important to be able to roll a transaction back to its original state.
  • Retry – the most basic pattern to implement, and essential for building transient fault tolerance.
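Since Retry is the most basic of these patterns, here is a minimal sketch of it with incremental back-off. It assumes the downstream call raises on transient faults; the delays are tiny so the example runs quickly, where a real service would use seconds plus jitter, and a circuit breaker would sit in front of this to stop calling a downstream that never recovers.

```python
# Sketch of the Retry pattern with incremental back-off. Assumes the
# downstream call raises ConnectionError on a transient fault. Delays
# are tiny for the example; real services would use seconds plus jitter.
import time

def retry(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                               # fault never cleared
            time.sleep(base_delay * (2 ** attempt))  # back off incrementally

# A flaky downstream call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```

The key design choice is only catching the transient fault type: retrying a genuine bug or a bad request just multiplies load, which is exactly the cascading-failure scenario described earlier.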

Item #10 – Remember this is an evolving process

As described earlier in this post, if you are looking to build out more cloud-based functionality and in turn increase the resiliency of your applications, the best advice I can give is to remember that this is an iterative process: look for opportunities, as you update your application, to increase resiliency along the way.

For example, let’s say I have to make changes to an API that sends notifications. If I’m going to make those updates anyway, maybe I can implement queues and logging, and break that piece out into a microservice to increase resiliency. As you do this, you will find your application’s posture improves.

Aligning Actions and Values

So I did a post in January around the idea of goals and aligning values, and I talked about making sure that the actions you take align with your values, because at the end of the day that’s what matters.

Since then I’ve gotten questions from colleagues who read that post about what it actually means to align actions to values, and how you do that. So I wanted to take a minute to drill down on this topic and really quantify what it means.

Many of you have likely heard of the urgency/importance matrix. This is a productivity idea that has gained momentum with a lot of experts, but specifically with Dr. Stephen Covey (The 7 Habits of Highly Effective People). The idea behind it is this: every action or demand placed on you has two aspects you should use to judge it:

  • Urgency
  • Importance

Urgency is the one everyone gets: at the end of the day, this is how quickly something requires my attention. But I would actually argue that a lot of people (including me) get this part wrong.

The challenge I would push back on people with is that a lot of times we let urgency be dictated by others. In its truest sense, I believe a lot of people, myself included, become addicted to urgency. We get this belief that if we don’t act right away, we will miss out or fail in some way. But just because something is an immediate need for one person does not mean it is for another, and there is almost a social contract whereby we need to set expectations accordingly. Honestly, that’s an entire blog post in itself.

The result is that we use urgency as the sole aspect by which we prioritize our efforts. That is where, as Scott Hanselman says, “you time travel”: we get caught up in the urgent (email is the worst example of this), and then we don’t feel like we accomplish anything.

The second aspect of any activity is importance, and this is the one that usually trips people up: “How do you define importance?” Now here’s the magic: for me, the importance of an item directly correlates to the values I am driven by. As I talked about in my last blog post, I have gone with the idea of value-based living, so for me the definition of important is a binary decision: “Does this align with my values?”

Below is the urgent/important matrix that many authors and researchers reference as being key to maintaining focus.

Now I’m going to steal from Scott Hanselman, as I think he sums it up best with his reaction to each of these:

                  Urgent         Not Urgent
Important         Do it now      Decide when to do it
Not Important     Delegate it    Dump it

So the key part here is that this gives a roadmap for how to align activities to your values and then decide the appropriate action. The idea being that, at the end of the day, I only have a finite number of hours left in my life and can only succeed at so many things, so I should focus my energies on items that align to my values and are important to me (see what I did there).

So, for example, I’ll be candid with you, my loyal readers: my values are the following:

  • Family
  • Innovation
  • Learning
  • Impact
  • Creativity
  • Achievement

So for me, I’m really trying (not always succeeding, but trying) to make sure I align my activities to things that fall in these 6 buckets. By putting my energy into those values, I’m making sure my actions drive maximum impact in the core areas that matter to me.

For example, it’s not arbitrary that the items up there are in that order. Family is always going to be the most important thing for me, and I will always prioritize actions for my family, like making sure my daughter is successful, over other activities.

But basically what I’m saying is that, for me, something doesn’t rate as important unless it relates to those values above and drives success in those areas. As I mentioned, I’ve put this together based on the works of Greg McKeown (Essentialism), Angela Duckworth (Grit), Mike Michalowicz (Clockwork), and a few tips from Scott Hanselman. Below is a great talk Scott gave on scaling yourself:

Weekly Links – 3/2

Hello all, here’s another round of weekly links. It’s been a crazy week, but with no travel, which is awesome, especially given all the concerns with travel and viruses.

Down to business…

Fun Stuff

I’ve made no secret of my fandom for Dungeons and Dragons, and recently the opening cinematic for Baldur’s Gate 3 dropped, and it is amazing. It can be found here.

How to learn TerraForm

So, as should surprise no one, I’ve been doing a lot of work with TerraForm lately, and I’m a huge fan of it in general. I recently did a post talking about the basics of modules (which can be found here).

But one common question I’ve gotten a lot is how to go about learning TerraForm. Where do I start? So I wanted to do a post gathering some education resources to help.

First, what is TerraForm? TerraForm is an open-source product created by HashiCorp that enables infrastructure-as-code and is specifically designed to be cloud-vendor agnostic. If you want to learn the basics, I recommend this video I did with Steve Michelotti about TerraForm and Azure Government:

The first part is configuring your machine, and for that you can find a blog post I did here. There are some things you need to do to set up your environment for TerraForm, and without any guidance it can be confusing.

Once your environment is set up, the question becomes: how do I actually learn to use it?

Beyond those resources, what I recommend is using the module registry. One of the biggest strengths of TerraForm is a public module registry that allows you to see reusable code written by others. I highly recommend this as a great way to see working code and play around with it. Here’s the public module registry.

So that’s a list of some resources to get you started on learning TerraForm. Obviously there are also classes from Pluralsight, Udemy, and Lynda; I’ve not leveraged those, but if you are a fan of structured class settings, they would be good places to start.