Where do I start – Service Fabric?

So containers have become an essential part of modern application development. I would go as far as to say that containers and microservices have had a similar impact on software development as “Object Oriented Programming” did.

That being said, I have been talking to a lot of people who run monolithic applications and are looking for a way to break them down into a microservice approach, while still using their existing infrastructure, and who don’t necessarily want to deploy on Linux for a variety of reasons.

Based on those requirements, there is an established technology that can take your Docker containers and orchestrate them in a Windows environment: Service Fabric.

If you are looking at breaking a monolithic application into microservices, I find the learning curve a lot easier to swallow with Service Fabric. It also helps you break up your applications to make better use of the compute on the machines in your cluster, and you can still leverage Docker.

Below are some links to help you get started with Service Fabric if you are looking for information on this technology:

Concepts and Architecture:

Service Fabric Overview:

Coding Samples:

Videos:

Terraform Kubernetes Template

Hello All, I wanted to get a quick blog post out here based on something that I worked on that is finally seeing the light of day.  I’ve been doing a lot of work with Terraform, and one of the use cases I found was standing up a Kubernetes cluster.  Specifically, I’ve been working with Azure Government, which does not have AKS available.  So how can I build a Kubernetes cluster, minimize the lift of creating it, and make it easy to add nodes afterwards?  The end result of that goal is here.

Below is a description of the project. If you’d like to contribute, please do; I have some ideas for phase 2 that I’m going to build out, but I’d love to see what others come up with.

Intention:

The purpose of this template is to provide an easy-to-use, Infrastructure-as-a-Service approach to deploying a Kubernetes cluster on Microsoft Azure. The goal is that you can start fresh with a standardized approach and preconfigured master and worker nodes.

How does it work?

This template creates a master node and as many worker nodes as you specify, and during creation it automatically executes the scripts required to join those nodes to the cluster. The benefit of this is that once your cluster is created, all that is required to add additional nodes is to increase the count of the “lkwn” VM type and reapply the template. This will cause the new VMs to be created, and the cluster will start orchestrating them automatically.

This template can also be built into a CI/CD pipeline to automatically provision the Kubernetes cluster prior to pushing pods to it.

This guide is designed to help you navigate the use of this template to stand up and manage the infrastructure required by a Kubernetes cluster on Azure. You will find the following documentation to assist:

  • Configure Terraform Development Environment: This page provides details on how to set up your local machine to leverage this template and do development using Terraform, Packer, and VSCode.
  • Use this template: This document walks you through how to leverage this template to build out your Kubernetes environment.
  • Understanding the template: This page describes the Terraform template being used and walks you through its structure.

Key Contributors!

A special thanks to the following people who contributed to this template:
Brandon Rohrer, who introduced me to this template structure and how it works, and who assisted with optimizing the functionality provided by this template.

Distributed Computing and Architecture Patterns

So lately I’ve been doing a lot of work on distributed programming, looking less at projects that live on-premises and need to be moved to the cloud, and more at projects that are born in the cloud and how to optimize them.  That said, what I’m talking about here is also applicable to the “lift-and-shift” type of project.

Ultimately the “cloud” is just like any other development project: there are considerations that need to be handled to get the best possible outcome from the environment.  There are things you can do to help your applications perform at their peak in the cloud.

In the traditional “monolithic” approach to designing applications, we would more or less work ourselves into a corner.  What I mean by that is we would build applications to consume servers and predetermined resources, and that meant that if you wanted to take that application and sell it, you were traditionally looking at a large capital expense.  More than that, if you wanted to increase scale, guess what…another capital expense, and this time with all the time required for a corporate purchase of that size.

Distributed computing attempts to solve that problem by enabling us to take that monolithic application and break it into the smallest parts we can, and then make each of those parts independently scalable to meet demand.  So instead of one big app, we have a “web” of smaller pieces doing different jobs, and the total is more than the sum of its parts.

The value add here is that by leveraging smaller, more isolated components, we can really focus on using what does the job best.  For example, you might have a dotnet application, but if Python is the best fit for a particular microservice, why would you handicap yourself and not use the best tool for the job?  Microservices allow you to make that choice.

Let’s start with things you should keep in mind when building distributed applications:

  • Loosely-Coupled Components:  For a distributed solution to truly work, all the pieces need to be loosely coupled.  This takes the form of creating “buffers” between the services.  These “buffers” normally take the form of messaging: you could use a service bus, or even just a queue, to communicate between services.  The idea is that the services don’t know anything about each other; one just adds an item to a queue, and the other removes the item from the queue.  This allows them to function independently and gives you the ability to deploy these components separately.
  • Handle Communication Appropriately:  Given that your application is now a series of interconnected smaller apps, it’s important to take some time and think about how you will pass information back and forth between them.  The application components are subject to change (platform, technology, endpoints, networking, etc.), so you need an abstraction layer between the different microservices to make sure they stay separately deployable pieces.
  • Build with Monitoring in Mind:  Given that your application is really made of all these separate parts, it’s important to remember that for your application to work, every component must be functioning properly.  Just like an Olympic team can’t compete unless every player is operating at the top of their game, your app can’t work if a component is unhealthy.  So when you set out to build microservice applications, make sure that you build and architect your code with monitoring in mind.
  • Build with Scale in Mind:  Given that your application is being built this way to encourage scaling, it’s important that you build your app in such a way that it can scale to meet the demands users are putting on the system.  Part of this comes down to making sure that you’re leveraging resources appropriately and not building systems that over (or under) consume the resources you are using.
  • Build with Errors in Mind:  Another item to consider is that your application is now a sum of “moving parts”, and sometimes things have errors or breakdowns that need to be handled.  These can be unplanned (exceptions or errors) or planned (the upgrade of a service).  Your application should be able to respond to these “transient” faults and not break down.  For example, one way is to leverage queues.  If component “A” is talking directly to component “B”, and “B” is in the middle of an upgrade, “A” might start throwing errors that bubble up to the user during the upgrade.  So now I need to notify users of downtime for the smallest change.  If I put a queue between them, then component “A” can continue to add elements to the queue, and while “B” is updating no errors occur.  When “B” finishes upgrading, it just starts pulling items off the queue as if nothing happened (there is a minimal sketch of this pattern right after this list).  This is even less of a problem when you have scaling built into your app.
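
To make the queue-based decoupling and transient-fault handling above a little more concrete, here is a minimal Python sketch using an Azure Storage Queue (the azure-storage-queue package). The queue name, connection string, and message shape are hypothetical; the same pattern applies to Service Bus, RabbitMQ, or any other messaging layer.

    # Minimal sketch: queue-based decoupling between two services.
    # Assumes "pip install azure-storage-queue"; the connection string,
    # queue name, and message shape are hypothetical.
    import json
    import time

    from azure.storage.queue import QueueClient

    QUEUE_NAME = "orders"  # hypothetical queue shared by service "A" and service "B"

    def get_queue_client(connection_string: str) -> QueueClient:
        return QueueClient.from_connection_string(connection_string, QUEUE_NAME)

    def service_a_enqueue(queue: QueueClient, order_id: str) -> None:
        """Service "A" only knows how to add work to the queue."""
        queue.send_message(json.dumps({"order_id": order_id}))

    def service_b_process(queue: QueueClient) -> None:
        """Service "B" pulls work off the queue whenever it is healthy.

        If "B" is down for an upgrade, messages simply wait in the queue
        and no errors bubble up to the user.
        """
        while True:
            for msg in queue.receive_messages(max_messages=10, visibility_timeout=60):
                try:
                    order = json.loads(msg.content)
                    print(f"processing order {order['order_id']}")
                    queue.delete_message(msg)  # delete only after the work succeeds
                except Exception:
                    # Transient failure: leave the message alone; it becomes
                    # visible again after the visibility timeout and is retried.
                    pass
            time.sleep(5)  # simple polling back-off when the queue is empty

Because neither service calls the other directly, you can deploy, restart, or scale either one without the other ever noticing.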

So the question is, how do you do that?  There are a lot of ways to approach this particular problem and to ensure that your app respects these principles.

I recommend the following steps:

  • Leverage a microservice approach:  There are a lot of articles out there (and some linked below) that will talk you through how to build a microservice application.  This can leverage a lot of technologies, including Service Fabric, Kubernetes, Docker Swarm, and others, to push your applications out with containers to support this approach.  You don’t have to use containers to do microservices, but they definitely help.
  • Always consider the best tool for the job:  One of the biggest benefits of microservices is the ability to leverage different stacks to solve different problems.  Don’t ignore this.
  • Leverage abstraction in communication between services:  As I mentioned above, this is paramount.  You need a deliberate communication strategy, and a lot of times it helps to be consistent in how you approach this across your apps.  It will make your life simpler in the long run.
  • Make your services backward compatible:  As mentioned above, the benefit of abstraction is that I can push updates to individual components at any time.  But to truly take advantage of this, I need to make my services backward compatible.  Take my example above: imagine that service “A” writes to a queue, and then service “B” reads from that queue to do processing.  Now if service “B” has scaled out to 10 instances and I try to update it, I don’t want to shut them all down at once and take that part of the app offline; instead I want to do a rolling update.  The idea is that while instance “B.1” is down, B.2 through B.10 are still processing messages.  But in order to do that, I have to be careful about how I change the message format on the queue and how I change the underlying database.  If I change the database for service “B”, instance “B.2” has to be able to talk to it even though it’s still running old code (see the sketch after this list).
  • Assume everything can change:  This is the best advice I can give.  Assume that anything can change around each microservice that you build, and by default you will be able to gracefully handle “transient faults” or “schema changes” without having to debug huge problems.
  • Leverage configuration management:  This is sort of a “1A” to the above entry.  If you leverage configuration management, using services like Redis, Table Storage, Key Vault, or other platforms, you can make changes without requiring a redeploy of the application.  This makes your life much easier when you deploy a service and need to change the configuration of other services.
  • Don’t reinvent the wheel:  There is no need to reinvent the wheel from an architectural standpoint, especially when you are learning.  If this is your first distributed project, lean on the known patterns and then take risks when you know more.
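
To illustrate the backward-compatibility and configuration-management points above, here is a hedged Python sketch of a “tolerant reader” for the queue messages from the earlier example, plus configuration pulled from the environment instead of being hard-coded. The field names, versions, and defaults are all hypothetical.

    # Minimal sketch: a tolerant reader so old and new instances of
    # service "B" can process messages side by side during a rolling update.
    # Field names, versions, and defaults are hypothetical.
    import json
    import os

    def parse_order_message(raw: str) -> dict:
        body = json.loads(raw)
        return {
            # A field every version of the message has always had.
            "order_id": body["order_id"],
            # A field added in a newer version: fall back to a default so
            # messages from older producers do not raise KeyError.
            "priority": body.get("priority", "normal"),
            # Any unknown fields are simply ignored rather than rejected.
        }

    # Configuration read from the environment (or Key Vault, Redis, Table
    # Storage, etc.) instead of being baked into the code, so it can change
    # without redeploying the service.
    QUEUE_NAME = os.environ.get("ORDER_QUEUE_NAME", "orders")
    BATCH_SIZE = int(os.environ.get("ORDER_BATCH_SIZE", "10"))

    if __name__ == "__main__":
        old_format = '{"order_id": "123"}'
        new_format = '{"order_id": "456", "priority": "high", "channel": "web"}'
        print(parse_order_message(old_format))   # works with the old schema
        print(parse_order_message(new_format))   # works with the new schema

The same idea applies to database changes: add fields with sensible defaults and keep old readers working, rather than renaming or removing what existing instances still depend on.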

I hope this helps as you start down this road.  Below are some additional links to help with this discussion:

Cloud Design Patterns:  This site provides detailed write-ups of some of the most common architecture patterns out there for the cloud.  These are especially helpful if you are currently more accustomed to working in an on-premises world, as they give a view of some of the considerations you should keep in mind.  I also like this site because it pulls in concepts you might not be fully used to supporting (high availability vs. disaster recovery, for example).

Architecture Center:  Another great site that contains general architecture guidance for cloud or on-premises.  It’s a helpful site that lays out the pros and cons of common patterns so that you can design solutions appropriately to meet your needs.

Architecting Distributed Applications:  A great online course that will walk you through what it means to build a truly distributed application.  This course is technology agnostic, which is always a good thing.

Building Distributed Applications with Akka.net:  A great video with an overview of Akka.net, which is a technology that helps you create applications using the Actor pattern.

Distributed Architecture with Microservices / Messaging:  A great video on microservices, which are the cornerstone of distributed computing.

Rethinking Distributed Systems for Data Centers:  Another great article on how to build applications in a distributed world to accommodate a varying degree of scale.

Azure Compute – Searchable and Filterable

Hello All, so a good friend of mine, Brandon Rohrer, and I just finished the first iteration of a project we’ve been working on recently.  Being Cloud Solution Architects, we get a lot of questions about the different compute options that are available in Azure, and it occurred to us that there wasn’t a way to consume this information in a searchable and filterable format to get the information customers need.

So we created this site:

https://computeinfo.azurewebsites.net
This site scrapes through the documentation provided by Microsoft, extracts the information about the different types of virtual machines you can create in Azure, and presents it in a way that meets the following criteria:

  • Searchable
  • Filterable
  • Viewable on a mobile device

Hope this helps as you look at your requirements in Azure and build out the appropriate architecture for your solution.