MaskPass

Stack


Java
TensorFlow
XML
Android Studio
Abstract
Research
Process
Code Breakdown
Reflection
References

Abstract


I developed MaskPass in response to the COVID-19 mask mandates: the app uses the phone camera to determine whether a person is wearing a non-medical mask. It was developed in Java with Android Studio and implements a TensorFlow machine learning model to recognize masks in captured images. The learning model was created using Lobe and trained on a large number of facial images of masked and unmasked people.


Imagine an empty office space where visitors are required to wear a mask, but no receptionist is present to verify it. MaskPass assists with this verification process through the visitor's phone camera, which links to the building's server. The app scans a visitor's face to verify that they are wearing a face mask. Once the scan completes, the app notifies the system whether the person is permitted to enter and unlocks the main door autonomously. In the future, the app could be expanded to run on an entrance kiosk instead of a phone.


MaskPass demonstrates how existing image processing technologies could improve our response to future pandemic outbreaks. As a student attending in-person classes and commuting by transit, there is always a risk of contracting COVID-19, especially when students and passengers choose not to wear a mask. If another serious outbreak requires masking, my app could verify that train passengers are wearing a mask before they board.

Research


My project is driven by two face mask detection research papers: one by Steven King and one by Mohammed Loey, both covering face mask detection using deep learning frameworks.

Steven discusses the development of Health Greeter Kiosks: stand-up kiosks that can detect whether people are wearing masks and social distancing. Using an RGB-D camera and a deep learning framework, these kiosks validate whether individuals are following the COVID-19 mandates [1].


Mohammed discusses the detection of medical face masks in real-life images using the ResNet-50 deep transfer learning model and the YOLOv2 object detection algorithm. His detection algorithm is described as achieving a precision of 81% [2].


I discovered that the innovations discussed in these research papers are not widely utilized in Western society, despite seeming highly useful during the pandemic. Steven was able to distribute his Health Greeter Kiosks across his campus at the University of North Carolina (UNC) [3]; however, the project is limited to only that location.

Figure 1: The graphical user interface of a Health Greeter Kiosk. Red circles mean users are violating social distancing measures; green circles mean the distance is safe. Bounding boxes detect each user's face and validate whether the user is wearing a mask.

The face mask detectors discussed in these research papers rely on government approval and funding to be set up at large scale. With the mask order repealed by the BC government as of March 11, 2022 [4], face mask detection setups are unlikely to be implemented in public places. Despite the repeal, BC COVID-19 cases are still rising, and mask wearing has been proven to reduce the risks [5].

As a potential solution, I propose innovating on the aforementioned face mask detectors as an accessible tool free of government restrictions. Our phones already contain many of the necessary sensors and capabilities, so implementing a face mask detection algorithm is feasible. Using our phones to identify whether a person is wearing a mask can benefit care homes, clinics, and other organizations that take health validation seriously. With my MaskPass app, users can scan their face and validate that they are wearing a mask before entering a public space. With more production time, MaskPass could potentially ping a receptionist that a visitor is wearing a mask, allowing the masked visitor into the premises.


Process


The process of developing MaskPass involved troubleshooting key features as separate applications: one that accesses an image file, one that takes a picture and displays it on the activity, and one that processes the image using the TensorFlow Lite model.


I began my process by researching the different learning models that can be used to detect face masks. I strongly considered YOLOv2 as my algorithm of choice for its accuracy, but ultimately opted for the TensorFlow Lite learning model, since it is accessible and offers easy-to-learn functions for Android and mobile development.


TensorFlow.org [6] was especially resourceful, providing setup details such as adding the TensorFlow implementations to the dependencies, creating a learning model directory in Android Studio, and the methods and libraries available in Android Studio.


Using Lobe.ai, I was able to train and export a TensorFlow Lite model by labeling a Kaggle dataset of masked and unmasked people [7].


One challenge encountered throughout development was keeping Android Studio's Gradle plugin up to date. I discovered that the TensorFlow Lite model was only compatible with newer versions of the Gradle plugin, so I had to ensure that my other code snippets for accessing the camera and the file location were also compatible with the newer plugin.

Once I implemented the TensorFlow Lite image classification methods in my Java files, my next step was to create a camera function and a method to initialize classification. My approach was to add one piece of functionality at a time so I could catch any errors or runtime problems immediately.



Code Breakdown


Classifying a face

The TensorFlow model was created with Lobe.ai and is used as the learning model when classifying a captured image. I placed the model as a .tflite (TensorFlow Lite) file under my Java directories. The TensorFlow implementation must be listed under dependencies in build.gradle; only then can the TensorFlow frameworks be referenced in the MainActivity.
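The dependency block might look like the following sketch. The version numbers are illustrative assumptions; the latest releases are listed on TensorFlow.org:

```groovy
dependencies {
    // TensorFlow Lite runtime and support library (versions are examples only)
    implementation 'org.tensorflow:tensorflow-lite:2.8.0'
    implementation 'org.tensorflow:tensorflow-lite-support:0.3.1'
}
```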

Figure 2: A snippet of my TensorFlow Lite model viewed in Netron.

Camera Action

My application features camera functionality: after the user presses a button, MaskPass gains access to the device's camera and opens the camera app. This is done by creating an intent with ACTION_IMAGE_CAPTURE inside the button's onClick(v) method, as seen in Figure 3.


  camera_open_id.setOnClickListener(new View.OnClickListener() {
      @Override
      public void onClick(View v) {
          // Launch the device's camera app and wait for the captured image
          Intent camera_intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
          startActivityForResult(camera_intent, pic_id);
      }
  });
  
Figure 3: Code snippet for the camera button.

At the end of my code, I implemented the onActivityResult() method to retrieve the captured image from the camera activity started in camera_open_id.setOnClickListener(). After the if statement confirms that the request code matches the photo ID and the camera app returned successfully, the captured image is converted into a Bitmap. The image's pixels are retrieved from the camera app using data.getExtras().get("data"), and the retrieved pixels are then displayed in the app layout as a bitmap using .setImageBitmap().


    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == pic_id && resultCode == RESULT_OK) {
            // The camera app returns a thumbnail Bitmap under the "data" extra
            Bitmap image = (Bitmap) data.getExtras().get("data");
            click_image_id.setImageBitmap(image);
            ...

  
Figure 4: Code snippet for onActivityResult().

Implementing Tensorflow Methods

After the user takes a picture with the camera app, the classifyImage method I implemented runs when the user selects the scan button. This method initializes the model from the .tflite file and scans through the pixel width and height of the taken image using a for loop, as seen in Figure 5.


  image.getPixels(intValues, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
  int pixel = 0;
  for (int i = 0; i < imageSize; i++) {
      for (int j = 0; j < imageSize; j++) {
          int val = intValues[pixel++]; // ARGB-packed pixel
          // Normalize each 8-bit channel to [0, 1] and write it to the model's input buffer
          byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 255.f)); // red
          byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 255.f));  // green
          byteBuffer.putFloat((val & 0xFF) * (1.f / 255.f));         // blue
      }
  }
Figure 5: Code snippet of the classifyImage() method's for loop.
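To make the bit arithmetic in Figure 5 concrete, here is a standalone sketch (plain Java, no Android dependencies) that unpacks a single ARGB-packed pixel and normalizes a channel to the [0, 1] range the model expects:

```java
public class PixelDemo {
    public static void main(String[] args) {
        // One ARGB pixel packed into a single int: A=0xFF, R=0x33, G=0x66, B=0x99
        int val = 0xFF336699;

        // Shift and mask to isolate each 8-bit colour channel
        int r = (val >> 16) & 0xFF; // 51
        int g = (val >> 8) & 0xFF;  // 102
        int b = val & 0xFF;         // 153

        // Scale to [0, 1], matching the byteBuffer.putFloat(...) calls above
        float rNorm = r * (1.f / 255.f);

        System.out.println(r + " " + g + " " + b);        // prints "51 102 153"
        System.out.println(rNorm >= 0.f && rNorm <= 1.f); // prints "true"
    }
}
```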

After the image is processed, an if statement checks the model's confidence that a mask was detected against a threshold. The confidence values come from the .tflite model's output. If the output meets the threshold, a mask is detected and a "You may enter" message is displayed on the screen.


  float[] confidences = outputFeature0.getFloatArray();
  int maxPos = 0;           // index of the most confident class
  float maxConfidence = 0;  // highest confidence seen so far
  for (int i = 0; i < confidences.length; i++) {
      if (confidences[i] > maxConfidence) {
          maxConfidence = confidences[i];
          maxPos = i;
      }
  }

Figure 6: Code snippet from classifyImage() that finds the highest-confidence class before it is checked against the threshold.
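The argmax loop above can be paired with the threshold check in a small helper. The sketch below is plain Java with an assumed label order and an assumed 0.8 threshold; the real labels and threshold come from the Lobe export and the app's configuration:

```java
public class MaskDecision {
    // Label order is an assumption for illustration; the real order comes from the model
    static final String[] CLASSES = {"Mask", "No Mask"};
    static final float THRESHOLD = 0.8f; // assumed confidence threshold

    // Returns the message shown to the user for a given confidence vector
    static String decide(float[] confidences) {
        int maxPos = 0;
        float maxConfidence = 0;
        for (int i = 0; i < confidences.length; i++) {
            if (confidences[i] > maxConfidence) {
                maxConfidence = confidences[i];
                maxPos = i;
            }
        }
        // Only admit the visitor when the model is confident a mask is present
        if (CLASSES[maxPos].equals("Mask") && maxConfidence >= THRESHOLD) {
            return "You may enter";
        }
        return "Please put on a mask";
    }

    public static void main(String[] args) {
        System.out.println(decide(new float[]{0.93f, 0.07f})); // prints "You may enter"
    }
}
```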

Reflection

Doing this project alone, I learned a great deal about learning models and image processing in Android Studio. Working with Android Studio, I encountered some difficulties adjusting from the Eclipse IDE, specifically with the BufferedImage type. I learned that Android Studio does not have a BufferedImage type; it can be replaced by the Bitmap type. In many ways Bitmap behaves like BufferedImage, converting your processed image into pixels and letting you loop through the width and height of the image.


I also learned that creating and training a learning model is simple, and there are many resources, such as Lobe.ai, that can help generate a model without complexity. Overall, I can see potential for MaskPass to be used by businesses concerned with public safety, prompting clients to use the app as validation. Despite the easing of the mask mandate, I feel that validating mask wearing is still a valuable precaution, as cases are still growing today.


References

[1] Steven King, Max Hudnell, "Health Greeter Kiosk: Tech-Enabled Signage to Encourage Face Mask Use and Social Distancing," ACM, 6 August 2021. [Online]. Available: https://dlnext.acm.org/doi/fullHtml/10.1145/3450550.3465339. [Accessed 25 February 2022].

[2] Mohammed Loey, et al. "Fighting against COVID-19: A novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection," Elsevier, 12 November 2020. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7658565/. [Accessed 25 February 2022].

[3] E. Tsai, "UNC increases number of health greeter kiosks around campus to encourage mask-wearing," The Daily Tar Heel, 17 March 2021. [Online]. Available: https://www.dailytarheel.com/article/2021/03/university-health-greeter-kiosks. [Accessed 25 February 2022].

[4] "B.C. takes next step in balanced plan to lift COVID-19 restrictions | BC Gov News", News.gov.bc.ca, 2022. [Online]. Available: https://news.gov.bc.ca/releases/2022HLTH0081-000324. [Accessed: 11 April 2022].

[5] Y. Wang, Z. Deng and D. Shi, "How effective is a mask in preventing COVID-19 infection?," National Library of Medicine, 2022. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7883189/. [Accessed 11 April 2022].

[6] "Android quickstart | TensorFlow Lite", TensorFlow, 2022. [Online]. Available: https://www.tensorflow.org/lite/guide/android. [Accessed: 11 Apr 2022].

[7] "Face Mask Classification", Kaggle.com, 2022. [Online]. Available: https://www.kaggle.com/datasets/dhruvmak/face-mask-detection. [Accessed: 11- Apr- 2022].

