RTO / RPO – Making Sense of High Availability

Hello all, in keeping with the last post on the blog, I've started doing some posts around High Availability. Ultimately the focus here is: how do I architect my solution to ensure that it meets the availability demands of my customers?


So odds are if you’ve started down this direction, you’ve heard 3 acronyms:

  • SLA – Service Level Agreement
  • RTO – Recovery Time Objective
  • RPO – Recovery Point Objective

So what does each of these mean, and how do they relate to your solution? For SLA, I covered this pretty extensively in my previous post, so I would direct you there for a definition and for recommendations on how to approach that topic.

So the next question is really what are RTO and RPO? And how do they relate to High availability?

What is RTO?

RTO stands for Recovery Time Objective. In software terms, this refers to how quickly you recover when something goes wrong.

So let’s take an example, because I work best with examples. Say I have a solution deployed in multiple regions, using Traffic Manager, with the solution replicated into another region. If Traffic Manager is checking the endpoint every 5 seconds, and 3 consecutive failures cause a failover, that means my RTO is roughly 15 seconds.

By using a dual-region deployment, I’m able to keep my RTO relatively low. Now, the above example is pretty simplistic. Really we should do this analysis per service in our architecture to determine how long each failover takes, and the longest of those is your solution’s RTO.
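To make that concrete, here’s a rough back-of-the-envelope sketch in Python (the service names and recovery times are hypothetical placeholders, not measurements) showing that the slowest component to recover is what sets the solution’s RTO:

# Rough per-service recovery estimates; all numbers are made up for illustration.
services = {
    "traffic-manager-failover": 5 * 3,  # 5 second probe interval, 3 failed probes
    "app-service-warm-up": 30,          # hypothetical time to warm the standby app
    "sql-geo-failover": 60,             # hypothetical time to promote the secondary database
}

# The solution's RTO is driven by the slowest service to recover.
solution_rto = max(services.values())
print(f"Estimated solution RTO: {solution_rto} seconds")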

How do we improve RTO?

Now, remember that this is really a measure of business continuity, spanning both High Availability and Disaster Recovery. Ultimately we are talking about service uptime more than anything else.

So the best way to improve RTO is to enable replication and take steps to increase the speed of recovery. In the last discussion on SLA, we took steps to minimize downtime by raising the SLA; this conversation is about minimizing the downtime caused by the failovers themselves.

The most important things involved in this are the following:

  • Monitoring
  • Response time
  • Data Replication
  • Failover

So the key metric to pay attention to is how long it takes to get up and running.

Monitoring is the cornerstone of your RTO target: if you don’t know there is a problem, you can’t fix it. Many blogs and articles focus on the next three parts, but let’s be honest, if you don’t know there’s a problem, you can’t respond. If your logs operate on a 5-minute delay, you need to factor those 5 minutes into your RTO.

From there, the next piece is response time, and I mean this in the true sense of how quickly you can trigger a failover to your DR state. How quickly can you triage the problem and respond to the situation? The best RTO targets leverage as much automation as possible here.

Next, by looking at data replication, we can ensure that we are able to bring any data stores back up quickly and maintain continuity of business. This is important because every time we have to restore a data store, that takes time and pushes out our RTO. It doesn’t do you much good to fail over in 2 minutes if it takes 20 minutes to get the database back up.

Finally, failover. If you are in a state where you need to fail over, how long does that take, and what automation and steps can you take to shorten that time significantly?
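Putting those four factors together, a simple way to reason about it is that your worst-case RTO is roughly detection time plus response time plus data recovery time plus failover time. Here is a minimal sketch with hypothetical numbers:

# Worst-case RTO estimate: detection + response + data recovery + failover.
# All numbers below are hypothetical placeholders.
detection_seconds = 5 * 60   # logs/alerts arrive on a 5-minute delay
response_seconds = 2 * 60    # automation triggers the failover after the alert fires
data_recovery_seconds = 0    # geo-replicated database, nothing to restore
failover_seconds = 15        # traffic is redirected to the secondary region

rto_seconds = detection_seconds + response_seconds + data_recovery_seconds + failover_seconds
print(f"Worst-case RTO: {rto_seconds / 60:.2f} minutes")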

Let’s give an example. Say I have a solution made up of the following in one region:

  • Azure App Service
  • Azure SQL

If I’m deployed in a single region, and my DR plan is to stand up another region in the event of a disaster, that solution has a pretty high RTO: if it takes 15 minutes to stand up that environment and deploy to it, then the RTO is 15 minutes. If I wanted to lower that, there are a couple of things I could do:

  • I can increase the automation I use to reduce that time.
  • I can keep another region running, or leverage options to do replication.
  • I can set up automation around detection and response.

What is RPO?

RPO stands for Recovery Point Objective, which focuses on recovery from a data perspective. If you have a disaster, how much data would be lost, and what would the impact be?

When looking at RPO, the key concern is data and potential data loss. So how do we minimize the window for data loss and lower the chances of lost transactions in your application?

There are a few key elements that can assist with this, starting with how your application handles data consistency and replication. It is possible to get to an RPO of 0, but only if you have continuous, synchronous data replication in your solution.

Now, the most important part is that the replication needs to be executed in a synchronous fashion, meaning the data must be written and replicated before an acknowledgment is sent. Eventual consistency, by contrast, will keep your RPO above zero, because the replication only “eventually” gets there.
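One way to picture the difference: with asynchronous (eventually consistent) replication, your worst-case RPO is roughly the replication lag, because anything written since the last replicated point can be lost. A tiny sketch with hypothetical numbers:

# Hypothetical numbers for an asynchronously replicated data store.
replication_lag_seconds = 30  # how far behind the secondary copy can fall
writes_per_second = 50        # average write rate of the application

# Worst case, everything written since the last replicated point is lost.
rpo_seconds = replication_lag_seconds
writes_at_risk = replication_lag_seconds * writes_per_second
print(f"Worst-case RPO: {rpo_seconds} seconds (~{writes_at_risk} writes at risk)")

# With synchronous replication the lag is effectively zero, so the RPO approaches zero.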

How do we improve RPO?

The most important factor here is replication and data consistency. We really need to make sure that transactional integrity is maintained and that consistency rules are enforced. This is why data stores like Cosmos DB gain popularity when requirements call for zero RPO and low RTO, because they support consistency models that can enforce this type of logic.


Needless to say, this all comes down to operations and math, and ultimately to balancing the requirements of your solution against cost and impact. You really want to take this only to the level you need, as it can add a lot of cost and substantially raise the complexity of your solution.

Keeping the lights on! – Architecting for availability?

Hello all, it’s been a while since I did a blog post outside of the weekly updates. But I wanted to do one on a topic that has come up in a lot of conversations lately and seems to be largely universal: High Availability. More and more, software is becoming a critical part of every aspect of our lives. To that end, as developers and engineers we see the following scenarios becoming a constant reality:

  • For end-customer software, not having access to an app or service for an extended timeframe can be the final nail in the coffin for a lot of users. Their tolerance for downtime continues to drop. If you don’t believe me, look into the metrics, reported by YouTube, on how long someone will wait for a video to load before leaving.
  • For enterprises, organizations are becoming more and more reliant on software to function at the most basic level, meaning that outages or downtime windows have an even greater impact on their business, causing more parts of the organization to have to function at a diminished capacity or not at all during an outage.

The end result of these perceptions and realities is that the demands put on software solutions for maintaining availability keep climbing. It becomes important to architect and plan for high availability from the start, because if you don’t, it can be very expensive and difficult to retrofit your applications to meet these demands.

This is a huge topic, and one that I’m not going to be able to cover in one blog post, but I’m hoping that we can identify ways to help if you are being tasked with meeting these demands.

Defining SLA


So the first part of this conversation, in my experience, always starts the same way: “What’s our SLA?” So let’s talk through what an SLA is. SLA stands for Service Level Agreement, and it is a legal agreement defining the level of service you are required to provide.

Now, the key part of that is “legal agreement”: this is not strictly a software function or engineering concept, but a business agreement, in the sense that if an SLA is not met, there is a financial obligation on the organization to compensate the customer (in an enterprise setting).

Be Reasonable…

Let’s not get crazy!

So the most common mistake I hear when someone starts down this road is “we need a 100% SLA”, which is a bad place to start this process. Realistically this is almost impossible; the idea that you will never have an outage is extreme. To get that level of resiliency you can expect to pay for it, and it’s easy to get upside down on your costs by starting out here. We really need to be realistic about the ask.

Let’s walk through an example: say you have software that provides grant processing for a municipality, and grant reviews are done Monday to Friday during business hours (8am–6pm). If your customer says “We need a 100% SLA”, I would make the counterargument of “Do you really?” If the system is down from 1–2am on a Saturday, does that really affect the nature of the business? Or is this just a matter of needing the solution to be up during those core business operating hours?

Conversely, let’s go the other way and say you are providing a solution for emergency service communication during a natural disaster. Would your customer be OK with 5 minutes of downtime at 2am in the middle of a hurricane? Probably not. So tolerance should be measured in terms of actual impact to the end users and their ability to function.

High Availability is like insurance: I can get add-ons to my policy for everything that could ever happen, but that means I will likely be paying for things I don’t need. I can get volcano insurance in Pennsylvania, but the odds of needing it are so low as to make it ridiculous.

So what we should be doing is finding a happy balance between what we can realistically achieve by following recommended practices, and weighing that against the business value and cost.

Let me give you a high-level example. Let’s say I deploy my production environment to one region, and I’ve calculated the composite SLA (more on this later) to be 99.9% for that region. That means that right now I am telling my customers to expect about 43.2 minutes of downtime a month.

But if I stood up a secondary region and built out a lot of automation around failover and monitoring (let’s say 80 hours of work), I could raise that SLA from 99.9% to 99.99%, which would mean about 4.32 minutes of downtime a month.
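If you want to run those downtime numbers yourself, the math is just the unavailability fraction multiplied by the minutes in a month. A quick sketch (using a 30-day month, which is where the 43.2 and 4.32 minute figures above come from):

HOURS_PER_MONTH = 720  # 30-day month

def monthly_downtime_minutes(sla_percent):
    # downtime = total minutes in the month * the fraction of time you are allowed to be down
    return HOURS_PER_MONTH * 60 * (1 - sla_percent / 100)

print(monthly_downtime_minutes(99.9))   # ~43.2 minutes
print(monthly_downtime_minutes(99.99))  # ~4.32 minutes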

Now what I need to weigh is the following:

  • 80 hours’ worth of labor costs
  • the opportunity cost of not using that labor on new features
  • doubling my environment costs (two active regions)
  • the potential advantage of supporting a higher SLA

And I look at that and say: I’m saving 38.88 minutes of downtime a month in the process. So the question is, does that help my business and make sense from a financial position? Or am I “OK” taking the financial hit, keeping only one environment up, paying out if we are down for more than the 99.99% allows, and rolling the dice on that?

I can’t say in the above discussion what the right answer is, because ultimately it depends on the type of business and resiliency of the application. You might be comfortable with that, you might not.

My point is that at the end of the day this is both an engineering problem and a business problem, and likely the right answer is somewhere in the middle.

Now to be clear, other times, especially in enterprise software, the customer may require a certain SLA, and at that point you might have to show that you meet that SLA by having specific redundancies in place. I’ll talk about this more in our next section.

Calculating a composite SLA


Another common question is “How do I calculate the SLA of my service?” This is more straightforward than people realize. Let’s take the following example:

Note: You can find all of Azure’s SLAs here.

Service          SLA
App Service      99.95%
Azure SQL        99.99%

So based on the table above, the composite SLA would be:

.9995 * .9999 = .9994 = 99.94%

So that would imply that your cloud provider, in standing behind these services, is allowing for monthly downtime of:

730 (hours per month) * (1 – .9994) ≈ 0.44 hours ≈ 26.28 minutes

Now, the above is an estimate, but that is roughly the monthly downtime we could expect from the platform. The calculation works the same way as you add more services: you just keep multiplying the SLAs together.
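If you’d rather script that than work it out by hand, a minimal sketch of the composite SLA calculation looks like this (the values are the ones from the table above):

from functools import reduce

def composite_sla(*slas):
    # Multiply the availability of every service in the critical path together.
    return reduce(lambda a, b: a * b, slas)

sla = composite_sla(0.9995, 0.9999)          # App Service x Azure SQL
print(f"{sla:.4%}")                          # ~99.94%
print(f"{730 * (1 - sla) * 60:.2f} minutes") # ~26.28 minutes of expected downtime per month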

Now, it’s important to note that this is the platform SLA, not your SLA. And I say that because, at the end of the day, this assumes your application doesn’t have issues of its own that cause downtime, so that should be considered as well.

How do we improve our SLA? Start with “what is down?”


Now, for many cloud services, Microsoft and every other cloud provider give recommendations to enhance resiliency and improve your SLA. One way to do that is to leverage features like Availability Zones and multi-region deployments. This lets you spread your application across multiple regions, which makes the probability of a complete outage drop substantially.

Really the first step here is to do a failure mode analysis and determine the critical functionality. What I mean by that is we need to define what constitutes the system being “down”. So let’s say you have an eCommerce platform, something like NopCommerce, with the following use-cases:

  1. Browse the catalog
  2. Add items to shopping cart
  3. Purchase items
  4. Publish blogs
  5. Send out notifications of deals / sales
  6. Process Orders

Now, based on the above, we could identify 1, 2, 3, and 6 as mission critical: if we can’t allow our customers to shop, buy, and receive their products, we are out of business. If we can’t publish a blog when we want to, or a sale notice goes out a little late, it’s not ideal, but it’s not the end of the world. And let’s say that Azure Functions send the notifications, and the blogs and promotions are managed in Cosmos DB.

So now, based on that, we need to examine our architecture and identify which components are required to maintain the four key use-cases we identified. Notice I left out the elements that are not part of the key functionality covered by our SLA.

Let’s say we have a proposed architecture in which Traffic Manager sits in front of two regions, each running an Application Gateway, an App Service, an Azure SQL database, and a caching layer.

Now based on the above, I can calculate our primary region SLA to be:

Service               SLA
Application Gateway   99.95%
App Service           99.95%
Azure SQL             99.99%
Total SLA             99.89%

So as a result of the above, we need to examine which elements of our solution are critical to meeting our uptime SLA, and then do a failure analysis. Based on the above use-cases, we can assume that Traffic Manager, Application Gateway, App Service, and Azure SQL are essential to meeting our SLA. For the sake of this example, let’s say the caching layer follows industry recommendations and is used only for speed of access; if it’s not available, the application will just reach out to the database.

So how do we calculate the composite SLA across the two regions? We do that with the following math:

We basically have to figure out the probability of both regions being offline at the same time. The single-region unavailability is 100% – 99.89% = 0.11%, so the chance of both regions being down at once is:

0.11% * 0.11% = 0.000121% (roughly 0.0001%)

Convert that back to availability:

100% – 0.0001% = 99.9999%

Now we multiply that by the Traffic Manager SLA, since it sits in front of both regions:

.999999 * .9999 ≈ .9999 = 99.99%
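Scripted, that multi-region math looks something like this (the SLA values are the same ones used above):

region_sla = 0.9995 * 0.9995 * 0.9999       # App Gateway x App Service x Azure SQL ~= 99.89%
both_regions_down = (1 - region_sla) ** 2   # probability both regions are out at the same time
multi_region_sla = 1 - both_regions_down    # ~99.9999%
overall_sla = multi_region_sla * 0.9999     # multiplied by the Traffic Manager SLA
print(f"{overall_sla:.4%}")                 # ~99.99%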

Failure Mode Analysis:

See the source image

A failure analysis means that we pick apart each element of the infrastructure and identify the following:

  • What potential failures could occur?
  • What different “modes” or “states” can this component be in?
  • How likely is a failure of this component?
  • What is the impact of each failure “mode” or “state” on the application?

After examining the above, you need to look at each of the “modes” or “states” and identify the following:

  • How will you respond and recover?
  • How will you monitor for this situation before, during, and after?

So let’s take an example, because to me that always helps. Looking at the above solution, take the Azure SQL database. If I were to do a failure mode analysis, I would find the following:

  • The database is offline in the following situations:
    • The database is offline due to a platform issue
    • The database has been shut down
    • The database has been deleted
  • The database is in a degraded state in the following situations:
    • The database is performing slowly due to high website demand
    • The database is running slowly due to bad query optimization
    • The database is experiencing deadlocks

Now, this is by no means an exhaustive list, but it hits the high points for our eCommerce site. For each of those states, I need to identify what to do. So the question is: how do we respond and recover? In the case of the database, the most common recommendations are to use a standard tier and to enable active geo-replication.

So for “How do we respond and recover?”, I would say we set up active geo-replication of our production database to a secondary region. In the event the database is “offline”, we fail over to the secondary region and leverage Traffic Manager to route traffic to the backup site. We would see some data loss during the failover, but for this exercise let’s say that is manageable.

The next question is the most important, how do we monitor for this? The answer is we could do this a couple of ways:

  • Set up alerts via Azure Monitor around specific metrics.
  • Set up alerts in Application Insights for dependency failures on database calls.
  • Build a page within our application that Traffic Manager can probe to identify when the database is unreachable and trigger failover (a rough sketch of such an endpoint follows).
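As a rough sketch of that last option, here is what a health endpoint that Traffic Manager could probe might look like, assuming a Python/Flask front end; the check_database() helper is a hypothetical placeholder for your real data-access check:

from flask import Flask, Response

app = Flask(__name__)

def check_database():
    # Hypothetical placeholder: run a cheap query (e.g. SELECT 1) against the database
    # and return True/False; swap in your real data-access code here.
    return True

@app.route("/health")
def health():
    # Traffic Manager probes this path; any non-200 response counts as a failed probe.
    if check_database():
        return Response("OK", status=200)
    return Response("database unreachable", status=503)

if __name__ == "__main__":
    app.run()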

The next mode was “degraded”. Here the response is to increase the performance tier of the database to handle the increased demand, or to do more in-depth analysis of the database’s performance. Again, the monitoring is similar: set up alerts around these conditions to make the appropriate staff aware.

So all kidding aside, this is a huge topic, and one I want to drill into further in terms of how best to implement these solutions. This post didn’t even begin to discuss RTO and RPO, or how to ensure resiliency through transient fault tolerance or distributed architectures, and that’s just scratching the surface, so more to come.

Leveraging Azure Search with Python

So lately I’ve been working on a side project, to showcase some of the capabilities in Azure with regard to PaaS services, and the one I’ve become the most engaged with is Azure Search.

So let’s start with the obvious question, what is Azure Search? Azure Search is a Platform-as-a-Service offering that allows for implementing search as part of your cloud solution in a scalable manner.

Below are some links on the basics of “What is Azure Search?”

The first step is to create a search service, and really I find the easiest way is via the CLI:

az search service create --name {name} --resource-group {group} --location {location}

So after you create an Azure Search Service, the next part is to create all the pieces needed. For this, I’ve been doing work with the REST API via Python to manage these elements, so you will see that code here.

  • Create the data source
  • Create an index
  • Create an indexer
  • Run the Indexer
  • Get the indexer status
  • Run the Search

Project Description:

For this post, I’m building a search index over data compiled from the Chicago Data Portal, which makes statistics and public information available via its API. This solution pulls data from that API into Cosmos DB to make the information searchable. I am using only publicly consumable information as part of this. The information on the portal can be found here.

Create the Data Source

So, the first part of any search discussion is that you need a data source you can search; you can’t get far without that. The question becomes: what do you want to search? Azure Search supports a wide variety of data sources, and for the purposes of this discussion I am pointing it at Cosmos DB. The intention is to search the contents of a Cosmos DB collection and pull back relevant entries.

Below is the code that I used to create the data source for the search:

import json
import requests
from pprint import pprint

#The url of your search service
url = 'https://[Url of the search service]/datasources?api-version=2017-11-11'
print(url)

#The API Key for your search service
api_key = '[api key for the search service]'


headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

data = {
    'name': 'cosmos-crime',
    'type': 'documentdb',
    'credentials': {'connectionString': '[connection string for cosmos db]'},
    'container': {'name': '[collection name]'}
}

data = json.dumps(data)
print(type(data))

response = requests.post(url, data=data, headers=headers)
pprint(response.status_code)

To get the API key, you need the admin key, which can be found with the following command:

az search admin-key show --service-name [name of the service] -g [name of the resource group]

After running the above you will have created a data source to connect to for searching.

Create an Index

Once you have the above data source, the next step is to create an index. The index is what Azure Search maps your data into, and it is what searches actually run against. Ultimately, think of it as the shape your data will take once indexing completes. To create the index, use the following code:

import json
import requests
from pprint import pprint

url = 'https://[Url of the search service]/indexes?api-version=2017-11-11'
print(url)

api_key = '[api key for the search service]'

headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

data = {
     "name": "crimes",  
     "fields": [
       {"name": "id", "type": "Edm.String", "key":"true", "searchable": "false"},
       {"name": "iucr","type": "Edm.String", "searchable":"true", "filterable":"true", "facetable":"true"},
       {"name": "location_description","type":"Edm.String", "searchable":"true", "filterable":"true"},
       {"name": "primary_description","type":"Edm.String","searchable":"true","filterable":"true"},
       {"name": "secondary_description","type":"Edm.String","searchable":"true","filterable":"true"},
       {"name": "arrest","type":"Edm.String","facetable":"true","filterable":"true"},
       {"name": "beat","type":"Edm.Double","filterable":"true","facetable":"true"},
       {"name": "block", "type":"Edm.String","filterable":"true","searchable":"true","facetable":"true"},
       {"name": "case","type":"Edm.String","searchable":"true"},
       {"name": "date_occurrence","type":"Edm.DateTimeOffset","filterable":"true"},
       {"name": "domestic","type":"Edm.String","filterable":"true","facetable":"true"},
       {"name": "fbi_cd", "type":"Edm.String","filterable":"true"},
       {"name": "ward","type":"Edm.Double", "filterable":"true","facetable":"true"},
       {"name": "location","type":"Edm.GeographyPoint"}
      ]
     }

data = json.dumps(data)
print(type(data))

response = requests.post(url, data=data, headers=headers)
pprint(response.status_code)

Using the above code, I’ve identified the data type for each field of the final index, and these all map to the data types supported by Azure Search. The supported data types can be found here.

It’s worth mentioning that there are other key attributes above to consider:

  • facetable: This denotes whether the field can be faceted. For example, on Yelp every restaurant has a “$” to “$$$$$” cost rating, and I want to be able to group search results by that facet.
  • filterable: This denotes if the dataset can be filtered based on those values.
  • searchable: This denotes whether a full-text search is performed on the field; only certain data types support it.

Creating an indexer

So the next step is to create the indexer, which is where the real work happens. The indexer is responsible for the following operations:

  • Connect to the data source
  • Pull in the data and put it into the appropriate format for the index
  • Perform any data transformations
  • Manage pulling in new data on an ongoing basis

Below is the code to create the indexer:

import json
import requests
from pprint import pprint

url = 'https://[Url of the search service]/indexers?api-version=2017-11-11'
print(url)

api_key = '[api key for the search service]'

headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

data = {
    "name": "cosmos-crime-indexer",
    "dataSourceName": "cosmos-crime",
    "targetIndexName": "crimes",
    "schedule": {"interval": "PT2H"},
    "fieldMappings": [
        {"sourceFieldName": "iucr", "targetFieldName": "iucr"},
        {"sourceFieldName": "location_description", "targetFieldName": "location_description"},
        {"sourceFieldName": "primary_decsription", "targetFieldName": "primary_description"},
        {"sourceFieldName": "secondary_description", "targetFieldName": "secondary_description"},
        {"sourceFieldName": "arrest", "targetFieldName": "arrest"},
        {"sourceFieldName": "beat", "targetFieldName": "beat"},
        {"sourceFieldName": "block", "targetFieldName": "block"},
        {"sourceFieldName": "casenumber", "targetFieldName": "case"},
        {"sourceFieldName": "date_of_occurrence", "targetFieldName": "date_occurrence"},
        {"sourceFieldName": "domestic", "targetFieldName": "domestic"},
        {"sourceFieldName": "fbi_cd", "targetFieldName": "fbi_cd"},
        {"sourceFieldName": "ward", "targetFieldName": "ward"},
        {"sourceFieldName": "location", "targetFieldName":"location"}
    ]
}

data = json.dumps(data)
print(type(data))

response = requests.post(url, data=data, headers=headers)
pprint(response.status_code)

What you will notice is that for each field, two attributes are assigned:

  • targetFieldName: This is the field in the index that you are targeting.
  • sourceFieldName: This is the field name according to the data source.

Run the indexer

Once you’ve created the indexer, the next step is to run it. This will cause the indexer to pull data into the index:

import json
import requests
from pprint import pprint

url = 'https://[Url of the search service]/indexers/cosmos-crime-indexer/run/?api-version=2017-11-11'
print(url)

api_key = '[api key for the search service]'

headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

reseturl = 'https://[Url of the search service]/indexers/cosmos-crime-indexer/reset/?api-version=2017-11-11'

resetResponse = requests.post(reseturl, headers=headers)

response = requests.post(url, headers=headers)
pprint(response.status_code)

The code above first calls the reset endpoint, which clears the indexer’s change-tracking state, and then triggers a run of the indexer, which loads the data into the index.

Getting the indexer status

Now, depending on the size of your data source, this indexing process can take some time, so I wanted to provide a REST call that will let you get the status of the indexer.

import json
import requests
from pprint import pprint

url = 'https://[Url of the search service]/indexers/cosmos-crime-indexer/status/?api-version=2017-11-11'
print(url)

api_key = '[api key for the search service]'

headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

response = requests.get(url, headers=headers)
index_list = response.json()
pprint(index_list)

This will provide you with the status of the indexer, so that you can find out when it completes.
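If you want to wait on the indexer programmatically, you can poll that same status endpoint until the most recent run finishes. A rough sketch (treat the status values as illustrative; check the response your service actually returns):

import time
import requests

url = 'https://[Url of the search service]/indexers/cosmos-crime-indexer/status/?api-version=2017-11-11'

headers = {
    'Content-Type': 'application/json',
    'api-key': '[api key for the search service]'
}

while True:
    status = requests.get(url, headers=headers).json()
    last_run = status.get('lastResult') or {}
    print(last_run.get('status'))
    # keep polling while the most recent run is still in progress (or hasn't started yet)
    if last_run.get('status') not in (None, 'inProgress'):
        break
    time.sleep(30)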

Run the search

Finally if you want to confirm the search is working afterward, you can do the following:

import json
import requests
from pprint import pprint

url = 'https://[Url of the search service]/indexes/crimes/docs?api-version=2017-11-11'
print(url)

api_key = '[api key for the search service]'

headers = {
    'Content-Type': 'application/json',
    'api-key': api_key
}

response = requests.get(url, headers=headers)
index_list = response.json()
pprint(index_list)

This will bring back the results of the search; since no search text is specified, it returns everything.
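To search for an actual term instead of everything, pass the search text on the query string; for example (the THEFT term and the $top value are just illustrative):

import requests
from pprint import pprint

# 'search' holds the search text; '$top' limits the number of results returned.
url = ('https://[Url of the search service]/indexes/crimes/docs'
       '?api-version=2017-11-11&search=THEFT&$top=10')

headers = {
    'Content-Type': 'application/json',
    'api-key': '[api key for the search service]'
}

response = requests.get(url, headers=headers)
pprint(response.json())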

I hope this helps with configuring Azure Search. Happy searching :)!

Book Review – Multipliers

So I have another book review, and honestly I’ve found myself traveling a lot, so there will probably be quite a few of these as I continue to have time to kill on a plane. The latest book I just finished was “Multipliers” by Liz Wiseman and Greg McKeown.

Now, I’ve read a book by Greg McKeown before, Essentialism, and found it to be a really excellent and thought-provoking read. It caused me to re-examine a lot of the ways I’d approached things in my life. So I was very excited to finally get around to reading the book he wrote earlier with Liz Wiseman.

This book takes the position that there are two types of leaders in this world: Multipliers and Diminishers. The former is the type of leader who causes their teams to aspire to great heights and rise to meet impossible challenges. The latter is the type of leader who crushes the spirit of the people they lead, causing them to deliver less and less.

I found this book to be rather insightful and interesting, as it made me question the type of leader that I want to be. That being said, I do feel that saying all leaders fall into one of two buckets is a bit of a falsehood. I believe all leaders have elements of both diminishers and multipliers in their approach, as no one is perfect.

But what I found in this book is that the examples are pretty dramatic, and in that regard it’s easy to say “I’m not that bad”. Still, I found it eye-opening in that it made me re-evaluate how I approach leadership. The focus of the book is on the “binary” nature of these two types of leaders, and to be honest I don’t totally agree with that assessment.

After reading this, I’m convinced that, much as introversion vs. extroversion isn’t binary but a sliding scale, the same can be said for multipliers and diminishers. Most of the leaders I’ve worked with are somewhere on that scale, and no one is perfect by any stretch of the imagination. I don’t feel that comes across in the authors’ description, and that could be because of the reliance on dramatic examples.

The other thing I found in this book is that it focuses on what I call “official leadership”, that is, having a title or position that puts you in a position of authority. But in my experience, leadership includes people who are not in a position of authority but who act as leaders. The beginning of the book seems to focus exclusively on the former, and it is easy to say “this doesn’t apply to me”, but I find that is not true.

Overall I found this to be a pretty insightful book. Below is a talk from Liz Wiseman at a CEO summit about the content of the book.

Capturing Web Cam Pictures with .net

So I was recently working on a project where we needed a laptop’s webcam to take pictures and then send those images to a web API endpoint.

Basically there are plenty of use-cases for this, but this was work done to support using Microsoft Cognitive Services; the project itself is a slimmed-down version of Microsoft’s Intelligent Kiosk sample.

So I have to be honest, I expected this problem to be a lot harder than it actually is. There is a great library called AForge.Video that made this work, which I was able to install from NuGet, and from there this is the code required:

using System;
using AForge.Video;
using AForge.Video.DirectShow;

static void Main(string[] args)
{
    // enumerate video devices
    var videoDevices = new FilterInfoCollection(FilterCategory.VideoInputDevice);

    // create video source from the first available device
    VideoCaptureDevice videoSource = new VideoCaptureDevice(videoDevices[0].MonikerString);

    // set NewFrame event handler
    videoSource.NewFrame += new NewFrameEventHandler(video_NewFrame);

    // ask the device to provide snapshots where supported
    videoSource.ProvideSnapshots = true;

    // start the video source
    videoSource.Start();

    //videoSource.SignalToStop();

    Console.ReadLine();
}

The above code identifies the video capture devices available on the machine, uses the first one, and then wires up an event handler to process each new frame.

From there, once I call “videoSource.Start();”, the application starts raising NewFrame events, which are handled by the following method:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Threading;
using AForge.Video;

private static void video_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    // get the new frame as a bitmap
    Bitmap bitmap = eventArgs.Frame;

    var fileName = @"C:\temp\camera\File_Frame.jpg";

    // save the frame as a jpg at full quality
    EncoderParameters encoderParameters = new EncoderParameters(1);
    encoderParameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);
    bitmap.Save(fileName, GetEncoder(ImageFormat.Jpeg), encoderParameters);

    // convert the frame to a byte array so it can be posted to an HTTP endpoint
    byte[] bytes;
    using (var stream = new MemoryStream())
    {
        bitmap.Save(stream, ImageFormat.Bmp);
        bytes = stream.ToArray();
    }

    Thread.Sleep(1000);
}

public static ImageCodecInfo GetEncoder(ImageFormat format)
{
    ImageCodecInfo[] codecs = ImageCodecInfo.GetImageDecoders();

    foreach (ImageCodecInfo codec in codecs)
    {
        if (codec.FormatID == format.Guid)
        {
            return codec;
        }
    }

    return null;
}

Now, the event handler above takes the frame, extracts the bitmap, converts it to a JPG, and saves the file. Additionally, I’ve added logic at the end to convert it to a byte array, which would allow you to push it up to an HTTP endpoint for processing by any services you need. Pretty simple for roughly 74 lines of code.

How to run a meeting that actually gets something done.

So for this post I wanted to do something more around soft skills, and I’ve actually had a couple of people ask me to write something up about this. Running a meeting is not the easiest thing in the world, but there always seems to be this perception that everyone should know how to do it without any guidance or instruction.

As part of my day job, I’m a pre-sales resource, so that means I run and coordinate a lot of meetings with a wide variety of people, everything from tech talks for developers, to regular cadence check-ins, and business strategy sessions with executives. And over the years I’ve come up with some tips and tricks to ensure that those meetings are productive, efficient, and don’t waste anyone’s time. Here are some tips to help if you find yourself in a position of having to run meetings and want to make sure they are productive.

Tip #1 – Time is valuable

This is more of a guiding principle than a tip, and one that you should take to heart immediately, and it really is the foundation of everything else in this blog. Everyone is busy, all the time…we live in a connected world where multi-tasking is the new normal. If someone is having a meeting with you, they are giving you the two most important things they have, time and attention. You need to treat these as valuable resources to be utilized appropriately, and not something to waste. This means do the following:

  • Be on time – This is common sense; you will never recover from arriving late, as it immediately convinces the people you’re meeting with that you don’t see their time as valuable.
  • End on time or EARLY – I know, blasphemy, but if I can end early, I always do. Your customers will thank you for this. I don’t rush meetings, but if we accomplish what we need to, just wrap up, no need to draw things out just because of the time block.
  • Make sure you have enough to justify the meeting: Not everything needs to be a meeting; sometimes a phone call will do. A phone call is usually better than email, but if either one can replace a meeting, take that option.

Tip #2 – Begin with the end in mind

This goes back to the point above about making sure you have enough to justify the meeting, and recognizing how valuable the attendees’ time is, as well as your own. The first question you should be clear on is: “What do I hope to accomplish here?”

This goes to Stephen Covey’s principle, “Begin with the end in mind.” If you can’t answer this question, don’t waste anyone’s time. But if you can, great: use that to structure the rest of the meeting and work backwards.

For example, if the goal is to get approval to embrace a new technology, start with the problems it solves. Give them a reason to care and then work backwards into what it takes to implement.

If the goal of the meeting is to understand the ramifications of an old technology, start with the downsides of the status quo and work towards the solution.

Make sure you know what your goal is because this provides a key metric for success and you can then objectively measure when the meeting is over.

Tip #3 – Have an agenda out ahead of time

This is another facet of the above: never go into a meeting without an agenda, even if it’s informal. You need to know in your mind how the meeting will run, and keep things focused on the goals you identified above.

Whenever possible, send out that agenda to let the attendees know exactly what will be covered. This is important not just because they know how it will flow, but it can help your attendees to identify people who they should include to make the meeting productive.

Tip #4 – Don’t skip small talk

This is the most common mistake I’ve seen: people are too focused on the immediate. Small talk before the meeting begins is important; this is how you build a relationship and rapport with your customer. If you don’t take time to build the relationship and help them see you as a person, it will hurt your credibility in the long run.

Now, it’s important to know when to cut this off and keep things light, but having small talk before a meeting helps make people comfortable. The more you get to know people, and the more you reference things they’ve said in future meetings, the more you drive home that you respect them and care about them as a person.

Tip #5 – Be respectful of their time

Start on time. Period. This is not hard, people: do not start late if you can avoid it. Starting late shows that you have no respect for their time, which, as I previously said, is the most important thing they have to give.

Also, if it looks like you might run over, make sure to give them an out, something like “I want to be respectful of your time, and we have 3 minutes…” and start to wrap it up. If they want to go long, they will allow you to. But this gives them an out and shows you care.

Tip #6 – Do introductions

If it’s a larger meeting, make sure you encourage introductions, and not just for you and your team; make sure everyone on the call or at the table introduces themselves. This shows each person in the room that you see them, care about them, and want to hear their voice. It helps make everyone feel included and comfortable.

Also resist the urge to introduce other people, let them introduce themselves, and what I mean by this is say something like “and given this topic, I wanted to bring Claire to this conversation…Claire, can you introduce yourself.”

Tip #7 – Never leave without confirming actions

Always make sure at the end of the meeting that you summarize the action items, take 5 minutes at the end to say that “These are the items I heard that have follow-up involved…” and make sure you say a name of a person with each item to drive home who is responsible. Also ask the customer for confirmation. This makes sure each person is aware of actions and expectations before they leave. This will make it easier to engage after the meeting.

Tip #8 – Your agenda should not be iron clad…be flexible

Another common mistake I see a lot is people getting too attached to their agenda. They say, “well, we are supposed to cover that last, so the customer will have to wait.” This is a mistake; as I said before, it’s their time, so if they want to restructure things, you should allow it. Now, I say this with a couple of rules:

  • It has to be on topic with the intent of the meeting.
  • There needs to be agreement from the team for the change of direction.
  • And all appropriate people need to be at the table.
  • If the order genuinely matters, say so and commit to addressing their topic very soon.

This is a fine line, but ultimately it goes back to tip 1: remember this is their time, and your agenda is not more important than their time.

Tip #9 – Don’t get derailed

During any meeting, sometimes you get someone who tries to derail it to meet their own needs. Never dismiss these concerns, but if you have to push them off as out of scope, give them validation about when you will address the topic. Something like: “That’s a little out of scope, but see me after and we can address those concerns.”