Musings on Ethical AI for Business and resources to help

When I was a kid, one of my favorite movies was Jurassic Park, because well…dinosaurs. I remember the movie being such a phenomenon that summer; there were shirts and toys everywhere. I even remember going to the community pool and seeing adults everywhere holding the book with the silver cover and the T-Rex skull on it.

It really was a movie ahead of its time, not just in its special effects or its treatment of cloning, but in how it described a societal nexus we were all headed toward that many people didn’t quite see yet. One of my favorite moments in the movie is when Jeff Goldblum’s character, having just survived a T-Rex attack, delivers this line:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Technology has grown by leaps and bounds, to the point that many now argue Moore’s law is irrelevant and outdated. We are making advances in every major area of life, to the point that the world we grew up in is completely unrecognizable compared to that of our children. And that question has become all the more relevant today with regard to artificial intelligence.

Just to be clear, these are the thoughts of one developer / architect (me) on this subject. I would recommend you research this heavily and come to your own conclusions; these are my opinions and mine alone.

We have reached a point where more and more businesses, and society in general, are looking to artificial intelligence as a potential solution to a lot of problems, and the question of AI ethics has become more and more prevalent. But what does that actually mean, and how can an organization build AI solutions that benefit all of humanity rather than cause unintended problems and potentially harm members of society?

The first part of this comes down to recognizing that artificial intelligence solutions need to be fully baked, and that great care needs to be given to mitigating built-in bias, both in the training data and in the end results of the service. So what do I mean by bias? I mean actively searching for potentially bad assumptions that might find their way into a model based upon a training dataset. Let’s take a hypothetical case that strikes close to home for me.

Say you wanted to build a system to identify patients that were at high risk for pneumonia; this is a hypothetical I talked through with a colleague a few months ago. If you took training data listing the conditions patients have and an indicator of whether or not they ended up getting pneumonia, this would seem like a logical way to tackle the problem.

But there are potential biases that could creep in, based on the fact that many asthmatics like myself tend to seek proactive treatment, as we are at high risk, and many doctors treat our colds very aggressively, mainly because when we get pneumonia it can be life threatening. If you don’t account for this bias it might skew the results of the AI system, because you likely won’t see many asthmatics in your training data who actually got pneumonia.

Another potential consideration is location: if I take my data sample just from the southwest, like Arizona, dry climates tend to be better for people with respiratory problems, so those patients might show a lower risk of pneumonia than the general population.

My point is that how you gather data and create a training data set requires a significant amount of thought and care to ensure success.
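As a purely illustrative sketch (not from any real project, and the record shape and field names here are made up), one of the simplest first checks on a training set like this is just measuring how each subgroup is represented before a model is ever trained:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical training record for the pneumonia example above.
public record PatientRecord(bool IsAsthmatic, string Region, bool GotPneumonia);

public static class TrainingDataAudit
{
    // Print how many records each subgroup has and how often the positive label
    // appears, so under-represented groups (e.g. asthmatics who actually got
    // pneumonia) stand out before the model ever sees the data.
    public static void ReportBySubgroup(IEnumerable<PatientRecord> data)
    {
        foreach (var group in data.GroupBy(r => (r.IsAsthmatic, r.Region)))
        {
            double positiveRate = group.Average(r => r.GotPneumonia ? 1.0 : 0.0);
            Console.WriteLine(
                $"Asthmatic={group.Key.IsAsthmatic}, Region={group.Key.Region}: " +
                $"{group.Count()} records, pneumonia rate {positiveRate:P1}");
        }
    }
}

It won’t catch every bad assumption, but a report like this makes the “almost no asthmatics with pneumonia in the data” problem visible before it gets baked into the model.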

The other major consideration is that every AI system is unique in the implications of a bad result. In the case above, a bad result is life threatening; for a recommendation engine like Netflix’s, it means I miss a movie I might like. Those are very different impacts on people’s lives, and this cannot be ignored, as it really does figure into the overall equation.

So the question becomes: how do we ensure that we are doing the right thing with AI solutions? The answer is to take the time to decide what values we, as an organization, will embrace at our core for these solutions. We need to make value-driven decisions about which implications we are concerned about and let those values guide our technology decisions.

For a long time, values have been one of the deciding factors between successful organizations and unsuccessful ones. The example that comes to mind is the Tylenol situation, where a batch of Tylenol had been tampered with. The board had a choice: pull all the Tylenol off market shelves for public safety and hurt their shareholders, or protect shareholders and deny the problem. The company’s values stated that customers must always come first, and that made the decision clear. It was absolutely the right decision. I’m giving a seriously abridged version, but here’s a link to an article on the scare.

Microsoft actually released an AI School for business to give customers a good starting point for figuring that out, with tracks for a variety of industries covering what should be considered in each. Microsoft has also made its position on ethical AI very clear in a blog post by company President Brad Smith and in Our Approach: Microsoft AI.

Below are the links to some of the training courses on the subject:

Alongside this, there has been a lot of discussion on the topic from some of the biggest executives in the AI space, including Satya Nadella:

But one of the most interesting voices I’ve heard with regard to the ethics and future of AI is Calum Chace, and I would encourage you to watch this, as it really goes into the depth of the challenges and the ways in which, if AI is not handled responsibly, we are looking at another major singularity in human evolution:

This is a complicated and multi-faceted topic, and it is great food for thought on a Friday. Empathy is the most important element of any technology solution, as these solutions are having greater and greater ramifications on society.

Where do I start – Microsoft AI

In the interest of helping you navigate the information available out there, I’ve been putting together ideas for this “Where Do I Start” series on the blog. As I previously mentioned, I’ve been studying for the AI-100 exam, and as part of that effort I found a lot of resources online that I thought I’d share in the interest of helping others.

There is a wealth of resources out there, and I want to focus your attention on those related to Microsoft AI and how you can leverage these services as accelerators for your own application development. Below are some of the key resources for getting started.

Learning Videos:


Additionally, I have done some work on my GitHub implementing the Face API, which is available here:

https://github.com/KevinDMack/FacialSearchDemo

Building a facial search in less than 2 hours

So, I’ve been pretty up front that I’ve been expanding my skills in the data science and AI space, and it’s been a pretty fun process. I wanted to point everyone to a demo I was able to build out very quickly.

Facial searching is a pretty common use case, so much so that we see it everywhere: Google Photos allows you to tag people in photos and indexes them for you automatically, and Facebook makes suggestions for tagging people when you upload new pictures.

So I wanted to build a service that would take a set of selected images of 3 members of my family, and then let me run any family photo through it and have it search for those known family members. It seems like a pretty basic use-case, but I’ve been wanting to get some hands-on experience with Azure Cognitive Services.

So I researched and read through our documentation, and decided that before I started I was going to set aside 2 hours and see how far I could get on the following:

  • Build a console app to read in images of 3 family members.
  • Build logic to upload an image and read back attributes about that person, things like age, gender, hair color, glasses, etc.
  • Build logic to search and match faces of people in a photo with the library that was previously uploaded.

The cool news is that I was able to implement all that logic. The full solution can be found here.

To start, I focused on the first use case. Azure Cognitive Services has a concept of “PersonGroups” that can be leveraged with the SDK. For starters you need to install the SDK from NuGet; this is the required package:

Microsoft.Azure.CognitiveServices.Vision.Face
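From the Package Manager Console in Visual Studio, that install looks like this (or use the equivalent dotnet CLI command):

Install-Package Microsoft.Azure.CognitiveServices.Vision.Face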

The first key piece is the client, which I configured in a parent class as follows:

using System.Configuration;
using Microsoft.Azure.CognitiveServices.Vision.Face;

public class BaseFaceApi
{
    // Pull the key and endpoint from app.config so credentials are not hard-coded.
    protected string _apiKey = ConfigurationManager.AppSettings["FaceApiKey"];
    protected string _apiUrl = ConfigurationManager.AppSettings["FaceApiUrl"];
    protected string _apiEndpoint = ConfigurationManager.AppSettings["FaceApiEndpoint"];
    protected FaceClient _client;

    protected void InitializeClient()
    {
        // Authenticate with the subscription key and point the client at the service endpoint.
        _client = new FaceClient(new ApiKeyServiceClientCredentials(_apiKey));
        _client.Endpoint = _apiEndpoint;
    }
}

This allows the configuration to live in the app.config, and this face client will be leveraged for all operations that hit the API.
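For reference, the appSettings section the class above reads would look something like this; the key and endpoint values are placeholders for your own Face resource:

<configuration>
  <appSettings>
    <!-- Placeholder values; use the key and endpoint from your own Face resource in the Azure portal. -->
    <add key="FaceApiKey" value="YOUR-FACE-API-KEY" />
    <add key="FaceApiUrl" value="https://YOUR-REGION.api.cognitive.microsoft.com" />
    <add key="FaceApiEndpoint" value="https://YOUR-REGION.api.cognitive.microsoft.com" />
  </appSettings>
</configuration>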

The Face API leverages the concepts of “PersonGroups” and “Persons” to manage the library of faces you are going to compare against. The process is broken into 4 parts:

  • Create the group
  • Create the person
  • Register Images for that person
  • Train the Model

If you review the source code you will find that I have broken these out into separate methods. The benefit of creating groups is that you can limit your searching to specific groups and have your application recognize the differences between them.
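As a rough sketch of that flow, the SDK calls look something like the following. The method breakdown, class name, and group/person names here are mine for illustration; the actual repo organizes this differently, and exact signatures can vary slightly by SDK version:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public class FaceLibraryBuilder : BaseFaceApi
{
    // Illustrative end-to-end setup: create a group, add a person,
    // register face images for that person, then kick off training.
    public async Task BuildLibrary(string groupId, string personName, IEnumerable<string> imagePaths)
    {
        InitializeClient();

        // 1. Create the group that will hold the people we want to recognize.
        await _client.PersonGroup.CreateAsync(groupId, "My Family");

        // 2. Create the person inside that group.
        var person = await _client.PersonGroupPerson.CreateAsync(groupId, personName);

        // 3. Register one or more face images for that person.
        foreach (var path in imagePaths)
        {
            using (var image = File.OpenRead(path))
            {
                await _client.PersonGroupPerson.AddFaceFromStreamAsync(groupId, person.PersonId, image);
            }
        }

        // 4. Train the model so the group can be used for identification.
        await _client.PersonGroup.TrainAsync(groupId);
    }
}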

Once you have completed loading these images and “Persons” into the service, you are ready to search the repository by uploading an image. This is done with the following code:

public async Task<Dictionary<Guid, FacePerson>> IdentifyFaces(string filePath, string groupID)
{
    InitializeClient();

    Dictionary<Guid, FacePerson> ret = new Dictionary<Guid, FacePerson>();

    using (Stream s = File.OpenRead(filePath))
    {
        // The list of face attributes to return for each detected face.
        IList<FaceAttributeType> faceAttributes =
            new FaceAttributeType[]
            {
                FaceAttributeType.Gender, FaceAttributeType.Age,
                FaceAttributeType.Smile, FaceAttributeType.Emotion,
                FaceAttributeType.Glasses, FaceAttributeType.Hair
            };

        // Detect the faces in the uploaded image, returning face IDs and the requested attributes.
        var facesTask = await _client.Face.DetectWithStreamWithHttpMessagesAsync(s, true, true, faceAttributes);
        var faceIds = facesTask.Body.Select(face => face.FaceId.Value).ToList();

        // Identify which registered "Persons" in the group match the detected faces.
        var identifyTask = await _client.Face.IdentifyWithHttpMessagesAsync(faceIds, groupID);
        foreach (var identifyResult in identifyTask.Body)
        {
            Console.WriteLine("Result of face: {0}", identifyResult.FaceId);
            if (identifyResult.Candidates.Count > 0)
            {
                // Get the top candidate among all candidates returned.
                var candidateId = identifyResult.Candidates[0].PersonId;
                var person = await _client.PersonGroupPerson.GetWithHttpMessagesAsync(groupID, candidateId);

                var fp = new FacePerson();
                fp.PersonID = person.Body.PersonId;
                fp.Name = person.Body.Name;
                fp.FaceIds = person.Body.PersistedFaceIds.ToList();

                // Copy the detected attributes for this face onto the result object.
                var faceInstance = facesTask.Body.Where(f => f.FaceId.Value == identifyResult.FaceId).SingleOrDefault();
                fp.Age = faceInstance.FaceAttributes.Age.ToString();
                fp.EmotionAnger = faceInstance.FaceAttributes.Emotion.Anger.ToString();
                fp.EmotionContempt = faceInstance.FaceAttributes.Emotion.Contempt.ToString();
                fp.EmotionDisgust = faceInstance.FaceAttributes.Emotion.Disgust.ToString();
                fp.EmotionFear = faceInstance.FaceAttributes.Emotion.Fear.ToString();
                fp.EmotionHappiness = faceInstance.FaceAttributes.Emotion.Happiness.ToString();
                fp.EmotionNeutral = faceInstance.FaceAttributes.Emotion.Neutral.ToString();
                fp.EmotionSadness = faceInstance.FaceAttributes.Emotion.Sadness.ToString();
                fp.EmotionSurprise = faceInstance.FaceAttributes.Emotion.Surprise.ToString();
                fp.Gender = faceInstance.FaceAttributes.Gender.ToString();

                ret.Add(person.Body.PersonId, fp);
            }
        }
    }

    return ret;
}

One key note above is the list of face attributes; this identifies the attributes you would like the service to detect and return, and you can trim the list down as you like.
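To tie it together, calling the method looks roughly like this. The FaceSearch class name, group ID, and file path are placeholders for whatever your own console app uses:

// Hypothetical caller; FaceSearch is assumed to be the class containing IdentifyFaces.
var search = new FaceSearch();
var matches = await search.IdentifyFaces(@"C:\photos\family-photo.jpg", "my-family-group");

foreach (var match in matches.Values)
{
    Console.WriteLine($"Matched {match.Name}: estimated age {match.Age}, happiness {match.EmotionHappiness}");
}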

Please feel free to review the sample code, and I hope you find a great use-case of your own. For me, a very cool project that is next on my list is to build a camera with a Raspberry Pi that captures people who come to the door and compares them against a known database of people.

It’s also worth mentioning that this service is fully available in Azure Government for customers that have requirements to be deployed in a sovereign cloud.