Why so serious? Smile analysis with Azure and .NET MAUI

Azure Cognitive Services is a family of cloud services that provide a plethora of AI-powered functionality. One of these is the Face API, which you can use to detect faces and, among other things, determine facial attributes such as age, emotion and smile. In this post I'll show you how to use this service together with .NET MAUI to create an app that takes a picture of you and gives you an objective measure of how much you're smiling!


To get started, follow the Prerequisites section of Microsoft's quickstart guide for the Face client library. It walks you through setting up the Face resource in the Azure portal; note down the key and endpoint of the resource you created.

Create your .NET MAUI app

Once that’s done, create a new .NET MAUI project using either Visual Studio 2022 Preview or the dotnet cli.

Add the NuGet package Microsoft.Azure.CognitiveServices.Vision.Face to your project. Check the “include prerelease” box and select the newest version, which at the time of writing is 2.8.0-preview.2.

Edit the template

Edit the OnCounterClicked method from the template. First we need to take the photo, so we’ll use the built-in MediaPicker for that:
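A minimal sketch of what that could look like (the exact template method body varies between .NET MAUI previews, and the `photo` variable name is my own choice; `CapturePhotoAsync` is the MediaPicker method that opens the camera):

```csharp
private async void OnCounterClicked(object sender, EventArgs e)
{
    // Open the device camera and let the user take a photo.
    // Returns null if the user cancels.
    FileResult photo = await MediaPicker.CapturePhotoAsync();
    if (photo == null)
        return;

    // ...the face detection steps below go here
}
```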

Remember to add the necessary permissions to AndroidManifest.xml or Info.plist in order to access the camera. When you run the app, an error will tell you which permissions you need to add.

Next we need to initialize the Face client library. We’ll need the subscription key and endpoint for the resource that we created in Azure for this. You’ll find this under Resource Management -> Keys and Endpoint under the Face API resource. In this example, I’ve created a Constants.cs class where I have stored those values:
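As a sketch, such a Constants.cs class and the client initialization could look like this (the constant names and placeholder values are my own assumptions; substitute the key and endpoint from your own resource):

```csharp
// Constants.cs -- replace the placeholders with the values found under
// Resource Management -> Keys and Endpoint in the Azure portal
public static class Constants
{
    public const string SubscriptionKey = "<your-subscription-key>";
    public const string Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/";
}

// Inside OnCounterClicked: create the Face client with those values
IFaceClient faceClient = new FaceClient(
    new ApiKeyServiceClientCredentials(Constants.SubscriptionKey))
{
    Endpoint = Constants.Endpoint
};
```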

After that we need to tell the API what kind of face attributes we want to retrieve. For that we define a FaceAttributeType array and add the desired attributes. As mentioned before, we can retrieve things such as age, smile, emotion, makeup(!) and more. For this exercise, we just want the Smile attribute.
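For example (note that in recent preview versions of the SDK, `DetectWithStreamAsync` expects a list of *nullable* `FaceAttributeType` values, hence the `?`; earlier versions use the non-nullable type):

```csharp
// We only request the Smile attribute; any attribute not listed here
// will come back as null on the detection result.
FaceAttributeType?[] faceAttributes = { FaceAttributeType.Smile };
```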

Now we can start the face detection. The Face API accepts an image either as a stream or as a URL. Since CapturePhotoAsync() gives us the file path of the image, we'll open the file as a stream first:
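Assuming `photo` is the FileResult from the camera step earlier, one way to do this is:

```csharp
// Open the captured image file as a stream for the Face API
using var fileStream = File.OpenRead(photo.FullPath);
```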

Then we can send the stream to the Face client using the DetectWithStreamAsync() method, passing in the file stream as a parameter along with the faceAttributes array that we defined earlier.
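A sketch of the call (parameter names and defaults vary slightly between preview versions of the SDK, so treat this as an approximation):

```csharp
// Detect faces in the image stream, requesting only the attributes
// we listed earlier. We don't need face IDs for this app.
IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithStreamAsync(
    fileStream,
    returnFaceId: false,
    returnFaceAttributes: faceAttributes);
```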

This method returns a list of detected faces for the image. We'll assume that we're taking a selfie here, so we simply take the first result. Then we can access its FaceAttributes.Smile property: a double between 0 and 1, where 1 is a super big smile.
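For example (the `smileScale` name is my own):

```csharp
// Assume a selfie: take the first detected face (null if none were found)
DetectedFace face = detectedFaces.FirstOrDefault();

// 0.0 = no smile, 1.0 = super big smile
double? smileScale = face?.FaceAttributes?.Smile;
```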

Note that all the other properties under FaceAttributes will be null, since we haven’t explicitly added them to our array of requested face attributes.

Now we can show the result to the user and tell them how happy they are! We multiply the smileScale by 100 to get a percentage value and show it on a Label that we add in the XAML. I’ve set the x:Name to HappyScaleLabel for mine.
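Updating that Label could look something like this (the message wording is just an example):

```csharp
// Show the result on the Label defined in the XAML (x:Name="HappyScaleLabel"),
// converting the 0-1 smile value to a percentage
HappyScaleLabel.Text = $"You are {smileScale * 100:0}% happy!";
```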

Here is a video of the whole thing in action, with yours truly:

Wrapping up

In this post I showed the power of the Face API and how easily you can use device features in .NET MAUI without having to pull in extra dependencies. I hope you found this useful, and that you take a look at the Cognitive Services Azure provides and get inspired to try out something similar.

This will be my last post before .NET MAUI hits GA, which is expected to happen at Build on May 24th this year. I'm pretty excited, and I encourage you to tune in to the keynote to see what else they will be announcing this year.

As always, I’ve provided a sample on GitHub with all this. Just remember to add your own Azure subscription key and endpoint to test it out.
