Data Drift Monitoring for Azure ML Datasets: Another timely video for a lot of people I know: how do we monitor data drift in our Azure ML models and keep them up to date? MLOps is becoming a real discipline, as ML projects suffer from code “staleness”, and that needs to be planned for early on.
Azure Service Bus from the Ground up: Service Bus is one of those foundational services in Azure; it’s really been around forever, and new enhancements are happening all the time. So it’s good to see how things are improving.
So for this week, it’s nothing in nerd pop culture, but a fascinating video I saw. It was recommended by a friend, and it sparked a lot of research for me around personal growth (blog post coming soon). The video is “What does game theory teach us about war?”, and there are a lot of parallels you can draw to how your own life functions in terms of finite vs infinite games.
Hello all, in keeping with the last post on the blog, I’ve started doing some posts around High Availability, so ultimately the focus here is: how do I architect my solution to ensure that it meets the availability demands of my customers?
So odds are, if you’ve started down this path, you’ve heard three acronyms:
SLA – Service Level Agreement
RTO – Recovery Time Objective
RPO – Recovery Point Objective
So what does each of these mean, and how do they relate to your solution? I covered SLA pretty extensively in my previous post, so I would direct you there for a definition and for recommendations on how to approach that topic.
So the next question is: what are RTO and RPO, and how do they relate to high availability?
What is RTO?
RTO stands for Recovery Time Objective. In software terms, it answers a simple question: when something fails, how fast do you recover?
So let’s take an example, because I work best with examples. Say I have a solution deployed in multiple regions, fronted by Traffic Manager, with the solution replicated into another region. If Traffic Manager checks the endpoint every 5 seconds, and 3 consecutive failures cause a failover, that means my RTO is 15 seconds.
By using a dual-region deployment, I’m able to keep my RTO relatively low. Now, the above example is pretty simplistic; really, we should do this analysis per service in our architecture to determine how long each failover takes, and the longest of those is your solution’s RTO.
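To make that concrete, here’s a minimal sketch of that analysis in Python. The probe settings match the Traffic Manager example above; the service names and per-service failover times are made up for illustration.

```python
# Detection time for the Traffic Manager example above:
# a 5-second probe interval, with 3 consecutive failures triggering failover.
probe_interval_seconds = 5
failures_before_failover = 3
detection_seconds = probe_interval_seconds * failures_before_failover
print(f"Detection time: {detection_seconds}s")  # 15s

# Hypothetical failover times (detection + cutover) per service, in seconds.
# The slowest service in the critical path sets the solution's RTO.
failover_seconds = {
    "app-service": 15,
    "sql-database": 90,
    "cache": 30,
}
solution_rto = max(failover_seconds.values())
print(f"Solution RTO: {solution_rto}s (set by the slowest service)")
```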
How do we improve RTO?
Now, remember that RTO is really a measure of business continuity, spanning both High Availability and Disaster Recovery. So ultimately we are talking about service uptime more than anything else.
So the best way to improve RTO is to enable replication and take steps to increase the speed of recovery. In the last discussion, we took steps to minimize downtime by increasing the SLA; this conversation is about minimizing the downtime caused by those failovers.
The most important factors involved are the following:
Monitoring
Response time
Data replication
Failover
Across all of these, the key metric to pay attention to is how long it takes to get back up and running.
Monitoring is the cornerstone of your RTO target. If you don’t know there is a problem, you can’t fix it. Many blogs and articles will focus on the next three parts, but let’s be honest: if you don’t know there’s a problem, you can’t respond. If your logs operate on a 5-minute delay, then you need to factor those 5 minutes into your RTO.
From there, the next piece is response time, and I mean this in the true sense of how quickly you can trigger a failover to your DR state. How quickly can you triage the problem and respond to the situation? The best RTO targets leverage as much automation as possible here.
Next, by looking at data replication, we can ensure that we are able to bring any data stores back up quickly and maintain business continuity. This is important because every time we have to restore a data store, that takes time and pushes out our RTO. Being able to fail over in 2 minutes doesn’t do you much good if it takes 20 minutes to get the database up.
Finally, failover: if you are in a state where you need to fail over, how long does that take, and what automation and steps can you take to shorten that time significantly?
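Putting those four pieces together, your effective RTO is the sum of every stage, not just the failover itself. Here’s a rough sketch; the numbers are illustrative (the 5-minute detection delay comes from the logging example above, the rest are assumptions).

```python
# Each stage of an outage contributes to the clock.
# Numbers are illustrative, not measurements.
rto_budget_minutes = {
    "detection (log/alert delay)": 5,
    "triage and decision to fail over": 3,
    "failover execution": 2,
    "data store recovery": 20,
}
for stage, minutes in rto_budget_minutes.items():
    print(f"{stage}: {minutes} min")
print(f"Effective RTO: {sum(rto_budget_minutes.values())} min")
# Note how the 2-minute failover is dwarfed by the 20-minute
# database recovery -- exactly the problem described above.
```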
Let’s give an example. Say I have a solution made up of the following in one region:
Azure App Service
If I’m deployed in a single environment, and my DR plan is to stand up another region in the event of a disaster, that solution has a pretty high RTO: if it takes 15 minutes to stand up that environment and deploy to it, then the RTO is 15 minutes. If I wanted to lower that, there are a couple of things I could do:
I can increase the automation I use to reduce that time.
I can spin up another region ahead of time, or leverage options to do replication.
I can set up automation around detection and response.
What is RPO?
RPO stands for Recovery Point Objective, which focuses on the ability to recover from a data perspective. If you have a disaster, how much data would be lost? What would the impact be?
When looking at RPO, the key concern is data and potential data loss. So how do we minimize the window for data loss and lower the chances of lost transactions in your application?
There are a few key elements that can assist with this, starting with how your application handles consistency. It is possible to get to an RPO of 0, provided you have constant data replication in your solution.
Now, the most important part is that the replication needs to be executed in a synchronous fashion, meaning the data must be written and replicated before an acknowledgment is sent. This also means that eventual consistency will keep your RPO higher than zero, because the replication will only “eventually” get there.
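To illustrate why, here’s a toy sketch of the two write paths. This is an in-memory illustration only, not a real replication protocol.

```python
# Toy illustration of synchronous vs asynchronous replication.
primary, replica = [], []

def write_synchronous(record):
    """Acknowledge only after the replica also has the record -> RPO of 0."""
    primary.append(record)
    replica.append(record)  # replicate *before* acknowledging
    return "ack"

def write_asynchronous(record):
    """Acknowledge immediately; replication happens 'eventually' -> RPO > 0."""
    primary.append(record)
    # Replication is deferred; if the primary dies now, this record is lost.
    return "ack"

write_synchronous("order-1001")
write_asynchronous("order-1002")
# If the primary fails at this point, "order-1001" survives on the
# replica, while "order-1002" was acknowledged but never replicated.
print(f"primary={primary}, replica={replica}")
```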
How do we improve RPO?
The most important factors here are replication and data consistency. We really need to make sure that the strength of transactions is maintained and that consistency rules are enforced. This is why data stores like Cosmos DB gain popularity when requirements call for zero RPO and low RTO: they support consistency models that can enforce this type of logic.
Needless to say, this all comes down to operations and math, and ultimately to the requirements of your solution, balanced against cost and impact. You really want to take this only to the level you need, as it can add a lot of cost and substantially raise the complexity of your solution.
Hello all, it’s been a while since I did a blog post outside of the weekly updates. But I wanted to do one on a topic I’ve been discussing a lot lately, and it seems to be largely universal: High Availability. More and more, software is becoming a critical part of every aspect of our lives. To that end, as developers / engineers, we really see the following scenarios becoming a constant reality:
For end-customer software, not having access to an app or service for an extended timeframe can be the final nail in the coffin for a lot of users. Their tolerance for downtime continues to drop. If you don’t believe me, look up YouTube’s metrics on how long someone will wait for a video to load before leaving.
For enterprises, organizations are becoming more and more reliant on software to function at the most basic level, meaning that outages or downtime windows have an even greater impact on their business, forcing more parts of the organization to function at diminished capacity, or not at all, during an outage.
The end result of these perceptions / realities is that the availability demands placed on software solutions keep climbing. It becomes important to architect and plan for high availability from the start; if you don’t, it can be very expensive and difficult to retrofit your applications to meet these demands.
This is a huge topic, and one that I’m not going to be able to cover in one blog post, but I’m hoping that we can identify ways to help if you are being tasked with meeting these demands.
The first part of this conversation, in my experience, always starts the same: “What’s our SLA?” So let’s talk through what an SLA is. SLA stands for Service Level Agreement, and it is a legal agreement on what level of service you are required to provide.
Now, the key part of that is “legal agreement”: this is not strictly a software function or engineering concept, but a business agreement, in the sense that if an SLA is not met, the organization has a financial obligation to compensate the customer (in an enterprise setting).
The most common mistake I hear when someone starts down this road is “we need a 100% SLA”, which is a bad place to start. Realistically, this is almost impossible; the idea that you will never have an outage is extreme. To get that level of resiliency you can expect to pay for it, and it’s easy to get upside down on your costs by starting out here. We really need to be realistic about the ask.
Let’s walk through an example. Say you have software that provides grant processing for a municipality, and grant reviews are done Monday to Friday during business hours (8am-6pm). If your customer says “We need a 100% SLA”, I would make the counter-argument of “Do you really?” If the system is down from 1-2am on a Saturday, does that really affect the nature of the business? Or is this just a matter of needing the solution to be up during those core business operating hours?
Conversely, let’s go the other way and say you are providing a solution for emergency service communication during a natural disaster. Would your customer be OK with 5 minutes of downtime at 2am in the middle of a hurricane? Probably not. So tolerance should be measured in terms of actual impact to the end user and their ability to function.
High Availability is like insurance: I can get add-ons to my policy for everything that could ever happen, but that means I will likely be paying for things I don’t need. I can get volcano insurance in Pennsylvania, but the odds of needing it are so low as to make it ridiculous.
So what we should be doing is finding a happy balance between what we can realistically do by following recommended processes, and weighing that against the business calculation and cost.
Let me give you a high-level example. Say I deploy my production environment to one region, and I’ve calculated the composite SLA (more on this later) to be 99.9% for that region. That means I am telling my customers to expect about 43.2 minutes of downtime a month.
But if I stood up a secondary region, and built out a lot of automation around failover and monitoring (let’s say 80 hours of work), I could raise that SLA from 99.9% to 99.99%, which would mean a downtime of 4.32 minutes a month.
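Those downtime figures fall straight out of the SLA percentage. A quick sketch of the math, assuming a 30-day month (which is where the 43.2-minute figure comes from):

```python
# Convert an SLA percentage into expected monthly downtime.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def monthly_downtime_minutes(sla_percent):
    return round(MINUTES_PER_MONTH * (1 - sla_percent / 100), 2)

print(monthly_downtime_minutes(99.9))   # 43.2 minutes
print(monthly_downtime_minutes(99.99))  # 4.32 minutes
# The difference, 38.88 minutes, is what the 80 hours of work buys you.
```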
Now what I need to weigh is the following:
80 hours’ worth of labor costs
The opportunity cost of not using that labor on new features
Doubling my environment costs (2 active regions)
The potential advantage of supporting a higher SLA
And I look at that and say: I’m saving 38.88 minutes of downtime in the process. So the question is, does that help my business and make sense from a financial position? Or am I “ok” taking the financial hit, keeping only one environment up, paying out whenever we miss the promised SLA, and rolling the dice on that?
I can’t say in the above discussion what the right answer is, because ultimately it depends on the type of business and resiliency of the application. You might be comfortable with that, you might not.
My point is that at the end of the day this is both an engineering problem and a business problem, and likely the right answer is somewhere in the middle.
Now to be clear, other times, especially in enterprise software, the customer may require a certain SLA, and at that point you might have to show that you meet that SLA by having specific redundancies in place. I’ll talk about this more in our next section.
Calculating a composite SLA
Another common question is “How do I calculate the SLA of my service?”, and this is more straightforward than people realize: you multiply together the SLAs of every service in your solution’s critical path, and the resulting percentage translates into an expected monthly downtime.
Now, the result is an estimate, but it is roughly the monthly downtime we could expect. The method doesn’t change the more services you add: each additional service is just one more SLA in the product.
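As a worked sketch, here’s that multiplication in Python. The SLA figures below are illustrative; always check the current published SLA for each service you use.

```python
# Composite SLA = the product of the individual service SLAs.
# Percentages below are illustrative, not authoritative.
service_slas = {
    "App Service": 99.95,
    "Azure SQL Database": 99.99,
    "Application Gateway": 99.95,
}

composite = 1.0
for sla in service_slas.values():
    composite *= sla / 100
print(f"Composite SLA: {composite * 100:.2f}%")  # roughly 99.89%
```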
Now, it’s important to note that this is the platform SLA, not your SLA. I say that because, at the end of the day, this assumes your application itself doesn’t have issues that cause downtime, so that should be factored in as well.
How do we improve our SLA? Start with “what is down?”
Now, for many cloud services, Microsoft and every other cloud provider give recommendations to enhance resiliency and improve your SLA. One way to do that is to leverage items like Availability Zones and multi-region deployments. This allows you to spread your application across multiple regions, which makes the probability of a total outage drop substantially.
Really, the first step here is to do a failure mode analysis and determine critical functionality. What I mean by that is that we need to define what constitutes the system being “down”. So let’s say, for instance, you have an eCommerce platform, something like NopCommerce, and you have the following use cases:
1. Browse the catalog
2. Add items to shopping cart
3. Check out and purchase items
4. Publish blog posts
5. Process orders for shipment
6. Send out notifications of deals / sales
Now, based on the above, we could identify 1, 2, 3, and 5 as mission-critical: if we can’t allow our customers to shop, buy, and receive their products, we are out of business. If we can’t publish a blog when we want to, or if a sale notice goes out a little late, it’s not ideal, but it’s not the end of the world. And let’s say that we have Azure Functions sending the notifications, and the blogs and promotions are managed by Cosmos DB.
So now, based on that, we need to examine our architecture and identify which components are required to maintain the four key use cases we identified. Notice I left off the elements that are not part of our key functionality for our SLA.
Let’s say we have the following proposed architecture: Traffic Manager in front of two regions, each containing an Application Gateway, an App Service, a caching layer, and an Azure SQL Database.
Now, based on the above, I can calculate our primary region SLA (the product of the SLAs of the critical services in that region), which for this example works out to roughly 99.88%.
So as a result, we need to examine which elements of our solution are critical to meeting our uptime SLA, and then do a failure analysis. Based on the above use cases, we can assume that Traffic Manager, the Application Gateway, the App Service, and Azure SQL are essential to meeting our SLA. For the sake of this example, let’s say that the caching layer follows industry recommendations and is used only for speed of access; if it is not available, the application will just reach out to the database.
So how do we calculate the compound SLA for the two regions? We do that with the following math:
We basically have to figure out the probability of both regions being offline at the same time. So we take the region “unavailability” of 0.12% (0.0012) and multiply it by itself:

0.0012 * 0.0012 = 0.00000144, or 0.000144%

Convert that back to availability:

100% – 0.000144% = 99.999856%

Now we multiply that by the Traffic Manager SLA:

99.999856% * 99.99% ≈ 99.99%
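Here’s the same multi-region math as a small sketch you can plug your own numbers into:

```python
# Multi-region availability: an outage requires BOTH regions down at once.
region_availability = 0.9988   # 99.88% per region (0.12% unavailability)
traffic_manager_sla = 0.9999   # Traffic Manager sits in front of both regions

both_regions_down = (1 - region_availability) ** 2  # 0.0012^2 = 0.00000144
multi_region = 1 - both_regions_down                # 99.999856%
composite = multi_region * traffic_manager_sla      # ~99.99%

print(f"Both regions down: {both_regions_down:.6%}")
print(f"Two-region availability: {multi_region:.6%}")
print(f"With Traffic Manager: {composite:.4%}")
```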
Failure Mode Analysis:
A failure mode analysis means that we pick apart each element of the infrastructure and identify the following:
What potential failures could occur?
What different “modes” or “states” can this component be in?
How likely is a failure of this component?
What is the impact of each failure “mode” or “state” on the application?
After examining the above, you need to look at each of the “modes” or “states” and identify the following:
How will you respond and recover?
How will you monitor for this situation, before, during, and after?
So let’s take an example, because to me that always helps. Let’s examine the above solution, specifically the Azure SQL Database. If I were to do a failure mode analysis, I would find the following:
The database is offline in the following situations:
The database is offline due to a platform issue
The database has been shut down
The database has been deleted
The database is in a degraded state in the following situations:
Database is performing slowly due to high website demand.
Database is running slowly due to bad query optimization
Database is experiencing deadlocks
Now, this is by no means an exhaustive list, but it hits the high points for our eCommerce site. For each of those states, I need to identify what to do, so the question is: how do we respond and recover? In the case of the database, the most common recommendations are to use a standard tier and to enable active geo-replication.
So for “How do we respond and recover?”, I would say we set up active geo-replication of our production database to a secondary region. In the event the database is “offline”, we fail over to the secondary region and leverage Traffic Manager to route to the backup site. We would see some data loss during the failover, but for this exercise, let’s say that is manageable.
The next question is the most important: how do we monitor for this? We could do this a couple of ways:
Set up alerts via Azure Monitor around specific metrics.
Set up alerts in Application Insights for dependency failures on database calls.
Build a page within our application that Traffic Manager can probe to identify when the database is unreachable and trigger a failover (see the sketch below).
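For that third option, here’s a minimal sketch of what such a health page could look like, using only Python’s standard library. The check_database() function is a stand-in for whatever real reachability test you would run against your database.

```python
# Minimal health endpoint for a Traffic Manager probe to hit.
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database():
    # Stand-in: replace with a real test, e.g. opening a connection
    # and running "SELECT 1" against the database.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # 200 keeps this region in rotation; 503 tells the probe
            # the endpoint is unhealthy, triggering failover.
            self.send_response(200 if check_database() else 503)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```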
The next mode was “degraded”, and if we examine that, the response is to increase the performance tier of the database to handle increased demand, or to do a more in-depth analysis of the database’s performance. Again, the monitoring would be similar: set up alerts around these conditions to make the appropriate staff aware.
So all kidding aside, this is a huge topic, and one I want to break down further in terms of how best to implement these solutions. This post didn’t begin to discuss the details of RTO / RPO, or how to ensure resiliency through transient fault tolerance or distributed architectures, and that’s just scratching the surface, so more to come.