Face Mask Detection Model Using VGG16

This is the second part of the face mask detection using OpenCV project. In the first part we pre-processed the face mask data and loaded it into Colab; here is a link if you want to read that first. In this blog, we build a model using VGG16. Before we start coding, we need to know what VGG16 and a convolutional neural network are. So let's begin building the model.

What are Convolutional Neural Networks and VGG16?

VGG16 is a convolutional neural network model proposed by researchers at the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". It is a very popular model, trained on ImageNet, a dataset of over 14 million images belonging to 1000 classes, and it achieves about 92.7% top-5 accuracy on that data. VGG16 works on RGB images; its input layer expects 224*224 RGB images. The architecture of VGG16 is shown here.

VGG16 Architecture
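If you want to inspect this architecture yourself, here is a minimal sketch (assuming TensorFlow is installed; it is not part of the original tutorial code) that loads the stock VGG16 with its ImageNet head and prints the layer summary:

from tensorflow.keras.applications import VGG16

# Loads the original 224x224 VGG16 with its ImageNet classification head
# (this downloads the pre-trained weights the first time it runs).
vgg = VGG16(weights='imagenet', include_top=True)
vgg.summary()  # shows the 13 convolutional + 3 fully connected weight layers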

Now let's start with the coding part.

Import Required Libraries

First, let's import the libraries for face mask detection. If you get an error while importing a library, simply install it with the following command in the terminal.

pip install package_name 
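On Google Colab, TensorFlow and matplotlib are usually pre-installed, so in a typical setup the only extra package you are likely to need is livelossplot:

pip install livelossplot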

For face mask detection we use the following libraries.

from tensorflow.keras.models import Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.optimizers import Adam
from livelossplot import PlotLossesKeras
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

Model Initialize

Now create the VGG16 model with an input size of (100, 100), which is the size we resized the images to when we imported the data into Colab (you can see that in the first part). The following code builds the VGG16 base model. Since this is a binary problem, the output will be a single value (0 or 1) that tells us whether the face has a mask or not, so we declare classes=1.

base_model = VGG16(weights='imagenet', include_top=False, input_tensor=Input(shape=(100, 100, 3)), classes=1)
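As a quick sanity check (this line is only illustrative, not part of the original tutorial), you can print the base model's output shape; with a 100x100 input, VGG16's five pooling stages reduce the image to a 3x3x512 feature map:

# Expected output: (None, 3, 3, 512) for a 100x100x3 input
print(base_model.output_shape)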

Now edit the output layers according to our requirement. The sigmoid activation function is used for the output layer.

new_model = base_model.output
new_model = Flatten()(new_model)
new_model = Dense(256, activation='relu')(new_model)
new_model = Dense(1, activation='sigmoid')(new_model)

We don't want to retrain the layers that already have pre-trained weights, so the following lines freeze every layer of the VGG16 base model. Their weights will not be updated during training; only our new output layers will be trained.

for layer in base_model.layers:
  layer.trainable = False
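To confirm the freeze worked (a quick illustrative check, not part of the original code), you can count how many base layers are still trainable:

# Should print 0, because every base layer is now frozen
print(sum(layer.trainable for layer in base_model.layers))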

Create Model

Now create our face mask detection model, whose input is the base model's input and whose output is the new head we just built. The following code creates the model object.

model = Model(inputs=base_model.input, outputs=new_model)

Compile Model

Compile the model using the Adam optimizer with a learning rate of 1e-4, and binary cross-entropy as the loss function. The metrics contain the accuracy of the model.

opt = Adam(learning_rate=1e-4)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

We can see a summary of the model with the following code.

model.summary()

Model summary

Model Train

Now it's time to train the model. We can fit the model with the fit() function, which takes arguments such as the data, epochs, validation split, callbacks, etc. For our model we pass X_train and y_train, which we created earlier, and we train for 15 epochs. In the callbacks we use the live loss plot and a learning-rate reduction. All the parameters are passed to the fit() function to train the model.

EPOCHS = 15
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, min_lr=0.00001, mode='auto')
callback = [PlotLossesKeras(), reduce_lr]

history = model.fit(X_train, y_train, epochs=EPOCHS, validation_split=0.1,
                    callbacks=callback)
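ModelCheckpoint is imported above but not used in this run; if you want to keep the best weights seen during training, a sketch of how it could be wired into the callback list before calling fit() (the file name here is just an example) is:

# Saves the model whenever validation loss improves (hypothetical file name)
checkpoint = ModelCheckpoint('best_mask_model.h5', monitor='val_loss', save_best_only=True, verbose=1)
callback = [PlotLossesKeras(), reduce_lr, checkpoint]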

As you can see, this line graph shows our model's accuracy and loss. The graph is generated in real time while the model trains.

Model Plot
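If you prefer a plot you can save or customize instead of the live plot, a minimal sketch using the history object returned by fit() might look like this (the metric key names assume TensorFlow 2.x, where accuracy is recorded as 'accuracy' rather than 'acc'):

import matplotlib.pyplot as plt

# Plot training/validation accuracy and loss from the history recorded by fit()
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.legend()
plt.show()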

The following code is used to save the model, which we will use with OpenCV for real-time prediction.

model.save("mask_detector.model", save_format="h5")
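To verify the saved file loads correctly (for example, before using it in the OpenCV part), you can reload it; this is just a quick check, not part of the original tutorial:

from tensorflow.keras.models import load_model

# Reloads the architecture and weights from the saved HDF5 file
loaded = load_model("mask_detector.model")
loaded.summary()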

Finally, we test our model on the test data and look at the results.

import tensorflow as tf
import matplotlib.pyplot as plt
import random

cat = ('no_mask', 'mask')
fig, ax = plt.subplots(1, 10, figsize=(15, 15))

for i in range(10):
  image = random.choice(X_test)
  predictions = model.predict(image.reshape(-1, 100, 100, 3))
  # threshold the sigmoid output at 0.5 instead of truncating it
  title = cat[int(predictions[0][0] > 0.5)]

  ax[i].imshow(image.astype('uint32'))
  ax[i].title.set_text(title)

fig.show()
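Beyond eyeballing a few images, you can also measure accuracy over the whole test set. This short sketch assumes you also kept the y_test labels from the first part (they are not shown in this article):

# y_test is assumed to hold the labels for X_test from the first part
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {acc:.3f}")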

Conclusion

This article is the second part of a face mask detection project; here is the first part. If you have any questions regarding this project, please feel free to comment below or contact us. The whole project file is available in the last article of this series. I hope this article helps you. If you want a free Python course, here is a link.
