FMD-Yolo: An efficient face mask detection method for COVID-19 prevention and control in public
Peishu Wu a, Han Li a, Nianyin Zeng a,⁎, Fengping Li b
a Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
b Institute of Laser and Optoelectronics Intelligent Manufacturing, Wenzhou University, Wenzhou 325035, China
Article info
Article history: Received 31 August 2021; Received in revised form 8 October 2021; Accepted 19 November 2021; Available online 25 November 2021
Keywords: Face mask detection; COVID-19; Improved YoloV3 algorithm; Feature extraction and fusion

Abstract
Coronavirus disease 2019 (COVID-19) is a world-wide epidemic, and efficient prevention and control of this disease has become the focus of global scientific communities. In this paper, a novel face mask detection framework FMD-Yolo is proposed to monitor whether people in public wear masks in the right way, which is an effective way to block virus transmission. In particular, the feature extractor employs Im-Res2Net-101, which combines the Res2Net module and a deep residual network, where the use of a hierarchical convolutional structure, deformable convolution and non-local mechanisms enables thorough information extraction from the input. Afterwards, an enhanced path aggregation network En-PAN is applied for feature fusion, where high-level semantic information and low-level details are sufficiently merged so that model robustness and generalization ability can be enhanced. Moreover, a localization loss is designed and adopted in the model training phase, and the Matrix NMS method is used in the inference stage to improve detection efficiency and accuracy. Benchmark evaluation is performed on two public databases, with the results compared against eight other state-of-the-art detection algorithms. At the IoU = 0.5 level, the proposed FMD-Yolo achieves the best precision AP50 of 92.0% and 88.4% on the two datasets, and its AP75 at IoU = 0.75 improves on the second-best method by 5.5% and 3.9% respectively, which demonstrates the superiority of FMD-Yolo in face mask detection with both theoretical value and practical significance.
© 2021 Elsevier B.V. All rights reserved. https://doi.org/10.1016/j.imavis.2021.104341

☆ This work was supported in part by the Natural Science Foundation of China under Grant 62073271, in part by the UK-China Industry Academia Partnership Programme under Grant UK-CIAPP-276, in part by the International Science and Technology Cooperation Project of Fujian Province of China under Grant 2019I0003, in part by the Fundamental Research Funds for the Central Universities under Grant 20720190009, in part by The Open Fund of Engineering Research Center of Big Data Application in Private Health Medicine under Grant KF2020002, and in part by the Fujian Key Laboratory of Automotive Electronics and Electric Drive (Fujian University of Technology) under Grant KF-X19002.
⁎ Corresponding author. E-mail address: zny@xmu.edu.cn (N. Zeng).
1. Introduction
Coronavirus Disease 2019 (COVID-19) is a world-wide spreading epidemic that has severely threatened the lives of all human beings since its outbreak. Recently, reported mutations of the virus have worsened the situation further. How to handle this disaster has become a focal issue in the global scientific community, and many efforts have been made, such as the development of effective vaccines and timely treatment approaches. According to research [25], COVID-19 mainly spreads by droplet and aerosol transmission, which can occur during social interaction with an infected person. Therefore, properly wearing a protective mask is a scientifically sound and readily available way of blocking the spread of COVID-19, as masks can filter and block virus particles in the air [18], which is also recognized and advocated by governments. In order to ensure the effective implementation of the above measures, it is necessary to develop an automatic detection method that determines whether individuals in public wear masks in the right way, which also facilitates intelligent prevention and control of the epidemic. Besides, people are also advised to keep their distance in public places and undergo routine temperature screening [1], all of which are convenient and effective methods for reducing transmission of COVID-19.
In the past few years, with the rapid development of deep learning, computer vision (CV) has been widely used in facial expression recognition [12,13,41], disease diagnosis [42] and so on. At the same time, CV techniques have boomed and many classical object detection methods have emerged, including one-stage, two-stage and anchor-free algorithms such as Faster RCNN [26], RetinaNet [20], YOLOv3 [29], YOLOv4 [3], CornerNet [17] and FCOS [34]. Existing algorithms for face and mask detection can be divided into two classes: real-time detection methods that pursue fast inference speed [2,15,24], and high-performance detection methods that prioritize accuracy. So far, much in-depth research has been carried out on face mask detection [24,16,32]. In [2], inspired by MobileNet and SSD, a lightweight model BlazeFace is proposed, which reaches sub-millisecond face detection speed and is suitable for deployment on mobile devices. [15] proposes a deep regression-based user image detector
(DRUID), which makes segmentation-based deep convolution algorithms available in the mobile domain and also offers significant time savings. A face detection model that integrates the Squeeze-Excitation Network and ResNet is designed in [44], where a regression network is applied to track and localize faces in video sequences. As for mask detection, [32] applies YOLOv3 and Faster RCNN to detect whether a mask is worn or not, and [24] proposes a transfer learning method based on ResNet50 and YOLOv2 for the detection of medical masks. In addition, RCNN-series algorithms are adopted in [16] to address a multi-task problem, including social distance calculation and mask detection.
According to the above discussion, it is found that few current methods highlight the unique characteristics of face mask detection; they generally handle this problem as a conventional object detection task. However, in practice, not only is detection of both the mask and the face necessary, but localization of the mask on the face is also of vital significance, as it determines whether the mask is worn in the right or wrong way. To meet this special demand, a mask-wearing-oriented detection scheme is proposed in this paper along with the corresponding algorithm, as shown in Fig. 1. In particular, three scenarios have been defined: not wearing a mask, and wearing a mask in the right/wrong way. Before splitting the data, preprocessing operations are performed to eliminate dirty data and adjust image size. Afterwards, anchor clustering is performed to select suitable anchor scales for the training data, which benefits model training by introducing task-specific prior knowledge; other data augmentation operations are then applied to enrich the samples. After sufficient training epochs, the best model and corresponding weight parameters are saved and evaluated. Finally, once the precision reaches a predefined criterion, the model is deployed on the terminal device for practical application. Notably, when video streams or pictures are fed into terminal devices, timely detection results can be output and visualized through the designed algorithm. Consequently, intelligent monitoring of mask wearing can be accomplished by deploying relevant devices in places with high population flow such as airports and supermarkets, which can effectively improve the efficiency of COVID-19 epidemic prevention and control.

Fig. 1. General flowchart of face mask detection algorithm (FMDA) framework.
The major contributions of this paper are summarized as follows:
1) A general face mask detection framework is proposed, which is mask-wearing oriented and can easily be extended for secondary development and improvement.
2) A novel face mask detection algorithm FMD-Yolo is proposed, where
the feature extractor, feature fusion operation and post-processing
techniques are all specifically designed.
3) Experimental results on two public datasets have fully demonstrated
the superiority of FMD-Yolo, which outperforms several state-of-
the-art methods. Moreover, the high generalization ability enables
application in other situations.
The remainder of this paper is organized as follows. Some preliminaries of related work are provided in Section 2. In Section 3, the detailed implementation of FMD-Yolo is elaborated, including data preprocessing, analytic methods, overall structure and post-processing techniques. Benchmark evaluation results and parallel comparisons with comprehensive discussions are presented in Section 4, and conclusions are drawn in Section 5.
2. Related work
For a comprehensive and in-depth understanding of the proposed face mask detection algorithm, some background information is presented in this section, covering the feature extractor, feature fusion methods and the YoloV3 framework.
2.1. Feature extractor based on residual structure
In the field of computer vision (CV), one important research topic is how to effectively extract features from multi-scale information. With the rapid development of deep learning techniques, feature extraction has gradually transformed from manual design to automatic construction by algorithms [14]. In object detection tasks, the corresponding component is known as the backbone, and plenty of networks have emerged to date, such as VGG [30], ResNet [8], DenseNet [9] and DLA (Deep Layer Aggregation) [40]. One practical strategy applied in those methods is to enrich connections among feature maps, for example by stacking layers and setting skip connections.
It should be pointed out that ResNet is one of the most widely applied backbones; it designs a residual block to overcome the degradation problem associated with training a deep network. Res2Net [5] constructs a hierarchical progressive residual connection, which is conducive to improving the multi-scale feature expression ability of the network at a finer level. The major innovation of Res2Net is to replace the 3 × 3 convolution in the bottleneck of ResNet with hierarchical cascaded feature group convolution, as shown in Fig. 2. The design of hierarchical convolution facilitates enlarging the receptive field within the bottleneck block, where more refined multi-branches can achieve better capture of the objects. To be specific, Res2Net integrates s groups of convolution kernels and has s × w channels in total, where w represents the width of the network. A larger value of s allows more abundant receptive fields for learning multi-scale features. In this paper, s and w are set to 4 and 26, respectively.
As shown in Fig. 2, the bottleneck in Res2Net is divided into s subsets after the 1 × 1 convolution, which are defined as X_i, i ∈ {1, 2, ..., s}. Each subset has the same spatial size as the input, but the number of channels becomes 1/s of it. Except for X_1, each sub-feature is processed by a 3 × 3 convolution operation C_i(·), i ∈ {2, 3, ..., s}, and lateral connections are adopted to enhance interaction. The hierarchical residual connection enables the receptive field to change at a finer-grained level so as to capture both details and global characteristics. Notice that the 3 × 3 convolution in each group can receive information from the branch to its left, so the final output, denoted as Y_i, i ∈ {1, 2, ..., s}, contains multi-scope features and is obtained as:
and can be obtained as:
Yi ¼
Xi i ¼ 1;
Ci Xi
ð Þ i ¼ 2;
Ci Xi þ Yi−1
ð Þ 2  i ⩽ s:
8

:
ð1Þ
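For illustration, the hierarchical computation of Eq. (1) can be sketched in PyTorch as follows. This is a minimal sketch rather than the authors' implementation: the class name HierarchicalConv is hypothetical, and batch normalization, activations and the surrounding 1 × 1 convolutions of the bottleneck are omitted.

```python
import torch
import torch.nn as nn

class HierarchicalConv(nn.Module):
    """Minimal sketch of the Res2Net hierarchical convolution in Eq. (1)."""
    def __init__(self, s=4, w=26):
        super().__init__()
        self.s, self.w = s, w
        # One 3x3 convolution C_i per group, for groups i = 2 .. s.
        self.convs = nn.ModuleList(
            nn.Conv2d(w, w, kernel_size=3, padding=1) for _ in range(s - 1)
        )

    def forward(self, x):
        xs = torch.split(x, self.w, dim=1)              # X_1 ... X_s
        ys = [xs[0]]                                    # Y_1 = X_1
        for i in range(1, self.s):
            inp = xs[i] if i == 1 else xs[i] + ys[-1]   # X_i, or X_i + Y_{i-1}
            ys.append(self.convs[i - 1](inp))           # Y_i = C_i(...)
        return torch.cat(ys, dim=1)                     # concatenated for the 1x1 conv

# Example: s * w = 104 channels in and out.
out = HierarchicalConv()(torch.randn(1, 104, 56, 56))
```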
2.2. Feature pyramid network
In early object detection algorithms, the output of the backbone network is directly fed into the subsequent component, the so-called detection head. However, these output feature maps usually cannot make full use of the available information to characterize objects at multiple scales, and their resolution is small compared with the original input. To overcome this limitation and enhance detection accuracy and reliability, feature fusion methods have become another research hot-spot, as they can merge multi-scale features from different levels.
In [21], the authors propose the single shot detector (SSD), which directly uses feature maps of different stages to achieve multi-scale detection, but there is no interconnection among individual feature maps. The feature pyramid network (FPN) [19] applies lateral concatenation and up-sampling operations to effectively merge the high-level semantic features in top layers with the high-resolution information in bottom layers. Notably, many other feature fusion methods are designed based on FPN, like the path aggregation network (PANet) [23], which adds an extra bottom-up path. This simple bidirectional fusion is proven helpful, as information in the bottom layers can be delivered upwards, and more complicated bidirectional fusion methods have emerged since. For example, the bi-directional feature pyramid network (BiFPN) [33] adds cross-scale connections to PANet. By means of multiple repetitions, information from each stage can be sufficiently integrated, and the fused feature maps can even be reused.
To sum up, feature map fusion approaches have developed from simple manners like top-down and bidirectional fusion to more complex ones, as shown in Fig. 3. Let C_i (i ∈ ℕ) denote the feature maps generated at each stage of the backbone network and P_i (i ∈ ℕ) denote the output obtained after multi-scale feature fusion. Defining the fusion operation as fusion(·), the relationship between output and input can be depicted as:

$$
(P_i, P_{i+1}, \ldots, P_{i+k}) = \mathrm{fusion}(C_i, C_{i+1}, \ldots, C_{i+k}), \quad i, k \in \mathbb{N} \tag{2}
$$
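As a concrete instance of Eq. (2), the following sketch implements the plain top-down (FPN-style) variant of fusion(·). The class name TopDownFusion and the channel widths are illustrative, and the merge is realized as element-wise addition as in FPN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """FPN-style instance of fusion(.) in Eq. (2): each P_i merges a 1x1
    lateral projection of C_i with the upsampled coarser output P_{i+1}."""
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )

    def forward(self, feats):                  # feats = [C3, C4, C5], fine to coarse
        lats = [l(c) for l, c in zip(self.laterals, feats)]
        outs = [lats[-1]]                      # start from the coarsest level
        for lat in reversed(lats[:-1]):        # top-down pathway
            up = F.interpolate(outs[0], size=lat.shape[-2:], mode="nearest")
            outs.insert(0, lat + up)           # P_i = lateral(C_i) + Up(P_{i+1})
        return outs                            # [P3, P4, P5]

c3, c4, c5 = [torch.randn(1, c, s, s) for c, s in [(256, 80), (512, 40), (1024, 20)]]
p3, p4, p5 = TopDownFusion([256, 512, 1024])([c3, c4, c5])
```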
2.3. You only look once (YOLO)
YOLO (You Only Look Once) series algorithms have gone through four versions [27–29,3] and are deemed masterpieces in the field of object detection; they have been widely applied in various scenarios, including road marking detection [39], license plate recognition [10] and agricultural production [37]. In particular, YoloV3 [29] is recognized as an important breakthrough that strikes a trade-off between detection precision and speed. The structure of YoloV3 is illustrated in Fig. 4, where the backbone adopts DarkNet53 with the fully connected layer removed; its primary components are convolution blocks and stacked residual units, denoted as CBR (Conv BN ReLU) and ResN respectively. Each CBR block contains convolution, batch normalization and leaky ReLU activation; N in ResN is the number of stacked residual units.
As shown in Fig. 4, another unique characteristic of YoloV3 is its multi-scale prediction, i.e., detection at 32×, 16× and 8× down-sampling of the input images, which generates feature maps of size 13 × 13, 26 × 26 and 52 × 52 for detecting large, medium and small objects, respectively. It is also remarkable that the feature fusion manner in YoloV3 is similar to the idea of FPN, which takes full advantage of high-level semantic features and low-level geometric information so that the overall framework has strong feature representation ability and, consequently, more accurate detection can be achieved.
Fig. 2. Comparison of conventional bottleneck structure with Res2Net module.

Fig. 3. Four types of feature fusion methods.

Fig. 4. The architecture of YoloV3 network.
3. Detailed implementations of proposed FMD-Yolo
In this section, the overall framework of the proposed FMD-Yolo method is elaborated, including data preprocessing, model implementation and post-processing methods.
3.1. Data pre-processing and analysis methods
Fig. 1 presents the procedure of the applied data preprocessing, which contains sample and batch transformations. First, the Mixup [43] method is applied along with other traditional data augmentation operations like random color distortion, expansion and flipping. Traditional methods enhance the robustness of model training by introducing noise and perturbations into a single image, whereas Mixup enhances the generalization ability of the model and improves the stability of the training process by linearly interpolating two random samples and their corresponding labels in the training set to generate new virtual samples, which can be depicted as:
depicted as:
~
x ¼ λxi þ 1−λ
ð Þxj
~
y ¼ λyi þ 1−λ
ð Þyj
ð3Þ
where xi and xj represent two different types of the input, yi and yj are
corresponding one-hot labels, and λ is mixing coefficient calculated by
beta distribution λ∼Beta α, α
ð Þ, α ∈ 0, ∞
ð Þ (α and β both take 1.5 in this
paper).
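A minimal NumPy sketch of Eq. (3) is given below. For clarity it mixes classification-style one-hot labels; in a detection setting such as this paper's, the images would be mixed in the same way while the bounding boxes of both samples are kept and weighted accordingly.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.5):
    """Mixup as in Eq. (3): interpolate two samples and their one-hot labels
    with a mixing coefficient lambda ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Example: two random 640x640 "images" with 3-class one-hot labels.
xa, xb = np.random.rand(640, 640, 3), np.random.rand(640, 640, 3)
ya, yb = np.array([1., 0., 0.]), np.array([0., 1., 0.])
x_mix, y_mix = mixup(xa, ya, xb, yb)
```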
Secondly, input images are resized to 640 × 640 and then normalized. To further match the training data with the FMD-Yolo model, k-means clustering is combined with the mutation operation of a genetic algorithm to select the best combination of prior anchor sizes, i.e., the “Anchor Cluster” step in Fig. 1 [4]: k-means produces initial anchor sizes, which are then further optimized by genetic mutation. It is noteworthy that an appropriate anchor size setting benefits model convergence and lets useful prior knowledge be learned, which speeds up the training process and yields results that better match the actual values. The detailed flowchart of the anchor cluster procedure is shown in Fig. 5.
As shown in Fig. 5, K initial clustering centers are first selected, where K is set to 9 in this paper. Each sample point is then assigned to its nearest cluster, measured by the IoU (Intersection over Union)-based distance between the clustering center and the sample point given in Eq. (4). Once the cluster centers are fixed and no longer change, k-means clustering is complete and the genetic algorithm is applied to evolve the anchor settings. To be specific, after parameter initialization (fitness function f, mutation probability mp, variation coefficient σ and number of generations gen), the mutation operation generates a new generation, which is multiplied with the previously obtained clustering centers to calculate f. The original clustering center is updated only when a larger fitness value is obtained, and after 1000 iterations the update terminates with the final anchor size setting saved.
$$
D(X_i) = 1 - \mathrm{IoU}(X_i, C_i) \tag{4}
$$

Fig. 5. The anchor cluster algorithm flow used in this paper.
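The anchor cluster procedure of Fig. 5 can be sketched as below. This is a simplified reconstruction from the description above, not the authors' code: the function names are ours, empty-cluster handling is omitted, and the fitness f is taken as the mean best IoU between boxes and anchors.

```python
import numpy as np

def iou_wh(wh, anchors):
    """IoU between (w, h) pairs, with boxes and anchors sharing one corner."""
    inter = np.minimum(wh[:, None, :], anchors[None, :, :]).prod(axis=2)
    return inter / (wh.prod(1)[:, None] + anchors.prod(1)[None, :] - inter)

def kmeans_anchors(wh, k=9, iters=100):
    """k-means under the distance D(X_i) = 1 - IoU(X_i, C_i) of Eq. (4)."""
    anchors = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(wh, anchors).argmax(axis=1)      # nearest center per box
        anchors = np.array([wh[assign == j].mean(axis=0) for j in range(k)])
    return anchors

def evolve_anchors(wh, anchors, gens=1000, mp=0.9, sigma=0.1):
    """Genetic-style refinement: keep a mutated anchor set only if the
    fitness f (mean best IoU) improves."""
    fitness = lambda a: iou_wh(wh, a).max(axis=1).mean()
    best = fitness(anchors)
    for _ in range(gens):
        scale = np.where(np.random.rand(*anchors.shape) < mp,
                         1.0 + sigma * np.random.randn(*anchors.shape), 1.0)
        cand = anchors * scale                           # mutated generation
        if fitness(cand) > best:
            anchors, best = cand, fitness(cand)
    return anchors

# Example: cluster 9 anchors from random box shapes (w, h) in pixels.
wh = np.random.rand(500, 2) * 300 + 10
anchors = evolve_anchors(wh, kmeans_anchors(wh))
```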
3.2. Details of FMD-Yolo network structure
Fig. 6 illustrates the framework and implementation details of the proposed face mask detection model FMD-Yolo. The designed deep network has three major components: Im-Res2Net-101, En-PAN, and classification + regression. In particular, Im-Res2Net-101
(101 denotes the number of layers) is a modified Res2Net structure serving as the backbone of FMD-Yolo; it is responsible for extracting features from the inputs and has three-stage outputs denoted as C3, C4 and C5. En-PAN is a bidirectional fusion approach based on PANet that enhances the detection performance of the FMD-Yolo network by further refining and fusing the information output by the backbone. Meanwhile, the loss function of FMD-Yolo is a modification of that of YoloV3, which will be introduced in the next subsection. For a more comprehensive understanding, Fig. 7 and Fig. 8 present the designed Im-Res2Net-101 and En-PAN architectures, respectively.

Fig. 6. Overall structure of proposed FMD-Yolo.

Fig. 7. The structure of Im-Res2Net-101 backbone.
Im-Res2Net-101 contains a total of five stages, and FMD-Yolo uti-
lizes the output feature maps of the last three stages and passes them
into the subsequent En-PAN feature fusion module. In stage 1, three 3 × 3 convolutions guarantee learning ability with only a small number of parameters. In stage 2, in order to retain as much valid information in the feature map as possible, a 3 × 3 max pooling with stride 2 is performed, after which the output passes through the Im-Res2Net-a module 3 times. From stages 3 to 5, the Im-Res2Net-b module is used and repeated 4, 23 and 3 times, respectively. The Non-local operation is introduced in stages 3 and 4, and deformable convolution [45] is applied in stages 4 and 5. Since the size and shape of human faces and masks in real scenes are changeable, a fixed receptive field leads to performance degradation. Deformable convolution, by contrast, provides modeling capability for geometric transformations by adding two-dimensional offsets to the conventional convolution operation, which enables the sampling region to deform freely. The output feature maps of regular and deformable convolution at each position c_0 are denoted as RC and DC respectively, as described in Eq. (5) and Eq. (6):
$$
RC(c_0) = \sum_{c_n \in S} w(c_n) \cdot x(c_0 + c_n) \tag{5}
$$

$$
DC(c_0) = \sum_{c_n \in S} w(c_n) \cdot x(c_0 + c_n + \Delta c_n) \tag{6}
$$

where S refers to the sampling grid area, c_n represents each position traversed on S, and w(·) is the weight function; Δc_n (n = 1, 2, ..., |S|) is the learned offset that displaces each sampling point on S.
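In PyTorch, a deformable convolution of the form of Eq. (6) is available through torchvision.ops.DeformConv2d; the sketch below predicts the offsets Δc_n with an ordinary convolution, as is common practice. Note this shows the basic (v1) form, whereas [45] additionally predicts a modulation mask.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """Sketch of Eq. (6): a 3x3 deformable convolution whose 2-D sampling
    offsets (two values per kernel position) are predicted from the input."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # 2 offset values (dx, dy) for each of the k*k sampling positions.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))   # Eq. (5) plus learned offsets

y = DeformableConvBlock(64, 128)(torch.randn(1, 64, 40, 40))
```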
In addition, to reduce the loss of information after convolution, the right branch of Im-Res2Net-b applies a 2 × 2 average pooling with stride 2 followed by a 1 × 1 convolution with stride 1, rather than a single 1 × 1 convolution with stride 2. The Im-Res2Net-b module is followed by a Spacetime Non-local block, which is essentially based on a self-attention mechanism that keeps the processed information from being limited to a local region [35] and thus directly captures long-range dependencies by computing the interaction between any two positions. “Spacetime” refers to space-based and time-based information; that is, the attention mechanism can also play its role in video sequences. The Non-local operation directly fuses global information instead of simply stacking multiple convolutional layers and can generate richer semantic information, which is equivalent to constructing a convolution kernel with the same size as the corresponding input feature map. The following equation describes the working principle of the Non-local operation:
$$
y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j) \tag{7}
$$
where x and y refer to the input and output feature maps, respectively; i and j represent the current and a global position, both indexing space and time. f(·,·) computes the similarity between i and j, for which the Embedded Gaussian function is used; g(·) computes a linear transformation of the feature map at position j, which can be realized by a 1 × 1 convolution; C(·) is a normalization factor, and the final output y is obtained after standardization.
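A minimal 2-D sketch of the Embedded Gaussian form of Eq. (7) follows. The class name NonLocalBlock and the channel reduction factor are illustrative; the softmax plays the role of f(·,·)/C(x), and a residual connection wraps the output as in [35].

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal non-local block (Embedded Gaussian form of Eq. (7))."""
    def __init__(self, ch, inner=None):
        super().__init__()
        inner = inner or ch // 2
        self.theta, self.phi = nn.Conv2d(ch, inner, 1), nn.Conv2d(ch, inner, 1)
        self.g, self.out = nn.Conv2d(ch, inner, 1), nn.Conv2d(inner, ch, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (n, hw, inner)
        k = self.phi(x).flatten(2)                     # (n, inner, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (n, hw, inner)
        attn = torch.softmax(q @ k, dim=-1)            # f(x_i, x_j) / C(x)
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.out(y)                         # residual connection

y = NonLocalBlock(64)(torch.randn(1, 64, 32, 32))
```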
Fig. 8 illustrates the structure and implementation of En-PAN, which adopts the bi-directional fusion of PANet and takes full advantage of multi-scale information, based on the Yolo Residual module, to enhance feature representation ability. In addition to the top-down path, a bottom-up path is added and connected horizontally to fully integrate the overall target information of the high-level feature maps with the low-level texture information. The top-down and bottom-up paths go
through a total of five Yolo Residual modules to thoroughly improve robustness. To make the feature maps match in size, an Upsample block and a 3 × 3 convolution with stride 2 are employed in the top-down and bottom-up paths, respectively. Based on the aforementioned three-stage outputs C3, C4 and C5 of Im-Res2Net-101, the output of En-PAN is obtained as:
$$
\begin{cases}
P_3 = \mathrm{YR}(F_3), & F_3 = C_3 \oplus \mathrm{Up}(F_4) \\
P_4 = \mathrm{YR}(\mathrm{Co}(P_3) \oplus F_4), & F_4 = \mathrm{YR}(C_4 \oplus \mathrm{Up}(F_5)) \\
P_5 = \mathrm{YR}(\mathrm{Co}(P_4) \oplus F_5), & F_5 = \mathrm{YR}(C_5)
\end{cases} \tag{8}
$$
where YR and Up denote the operations associated with the Yolo Residual block and the Upsample block respectively, and Co represents a 3 × 3 convolution with stride 2.

Fig. 8. The implementation of En-PAN structure.
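The wiring of Eq. (8) can be sketched as follows. The stand-in modules are ours: plain convolutions replace the Yolo Residual block, ⊕ is realized as element-wise addition for simplicity, and all levels are assumed to share one channel width.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def YR(ch):                       # stand-in for the Yolo Residual block
    return nn.Conv2d(ch, ch, 3, padding=1)

def Co(ch):                       # the 3x3, stride-2 convolution "Co"
    return nn.Conv2d(ch, ch, 3, stride=2, padding=1)

class EnPANSketch(nn.Module):
    """Wiring of Eq. (8): a top-down pass producing F3..F5, then a
    bottom-up pass producing P3..P5."""
    def __init__(self, ch=256):
        super().__init__()
        self.yr_f5, self.yr_f4 = YR(ch), YR(ch)
        self.yr_p3, self.yr_p4, self.yr_p5 = YR(ch), YR(ch), YR(ch)
        self.co3, self.co4 = Co(ch), Co(ch)

    def forward(self, c3, c4, c5):
        up = lambda t, ref: F.interpolate(t, size=ref.shape[-2:], mode="nearest")
        f5 = self.yr_f5(c5)                     # F5 = YR(C5)
        f4 = self.yr_f4(c4 + up(f5, c4))        # F4 = YR(C4 (+) Up(F5))
        f3 = c3 + up(f4, c3)                    # F3 = C3 (+) Up(F4)
        p3 = self.yr_p3(f3)                     # P3 = YR(F3)
        p4 = self.yr_p4(self.co3(p3) + f4)      # P4 = YR(Co(P3) (+) F4)
        p5 = self.yr_p5(self.co4(p4) + f5)      # P5 = YR(Co(P4) (+) F5)
        return p3, p4, p5

sizes = [(1, 256, 80, 80), (1, 256, 40, 40), (1, 256, 20, 20)]
p3, p4, p5 = EnPANSketch()(*(torch.randn(s) for s in sizes))
```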
The Yolo Residual Block also has two branches: the left side includes one 1 × 1 convolution and three Conv Blocks, while the right side has one 1 × 1 convolution and an SE (Squeeze and Excitation) Layer [11]. The outputs of both branches are concatenated and, after a 1 × 1 convolution, the final output is obtained. It should be pointed out that the applied Conv Block introduces Coordinate Convolution (Coord Conv) [22], Spatial Pyramid Pooling (SPP) [7] and DropBlock [6] optimization strategies. In particular, Coord Conv adds two channels to the input feature maps that represent the horizontal and vertical coordinates, which makes the convolution spatially aware. Since the face mask detection task in this paper outputs bounding boxes by observing image pixels in a Cartesian coordinate space, the application of Coord Conv compensates for the defects of ordinary convolution under spatial transformation and so improves the space perception ability. DropBlock is a regularization method that can be deemed a variant of Dropout; it drops not only a single unit in the feature map but also its neighboring region, to prevent over-fitting and guarantee the efficiency of regularization, which makes it well suited to the face mask detection task of FMD-Yolo. SPP further aggregates the features obtained from the previous convolutional layers and can adapt to changes in target size and aspect ratio, a proven multi-resolution strategy that makes the model more robust against target deformation. In this paper, the SPP module is constructed from three parallel max-pooling layers with kernel sizes of 5, 9 and 13 respectively, whose outputs are concatenated with the input. Moreover, the SE Layer of the Yolo Residual Block enables adaptive recalibration of feature channels by explicitly modeling their interdependence through squeeze and excitation operations, allowing the network to selectively enhance beneficial feature channels and suppress useless ones using global information; it is realized by one average pooling and two fully-connected layers.
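As an illustration of the SPP configuration just described, the sketch below concatenates the input with the outputs of three stride-1 max-pooling layers of kernel sizes 5, 9 and 13, with padding chosen to preserve resolution.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """SPP as described above: three parallel max-pooling layers (kernel
    sizes 5, 9, 13; stride 1; 'same' padding), concatenated with the input."""
    def __init__(self):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13)
        )

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)  # 4x channels

y = SPP()(torch.randn(1, 256, 20, 20))   # -> (1, 1024, 20, 20)
```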
3.3. Loss function and post-processing methods
On the basis of the cross-entropy loss function of the original YoloV3 network, the proposed FMD-Yolo algorithm further introduces a localization loss, comprising an IoU loss and an IoU-aware loss. IoU measures the difference between a predicted bounding box and the corresponding ground truth, which reflects localization accuracy and helps maintain model performance in complex environments such as multiple overlapping targets. In this paper it is calculated as:
$$
L_{iou} = (1 - iou^2) \cdot w_i \tag{9}
$$
where w_i is an introduced weight with a value of 2.5.
The IoU-aware loss further adopts an additional IoU-aware prediction branch to better measure localization accuracy [38], and is obtained by:
$$
L_{iou\text{-}a} = -iou \cdot \log(\sigma(o)) - (1 - iou) \cdot \log(1 - \sigma(o)) \tag{10}
$$
where iou denotes the intersection over union between the predicted boxes and the ground truth, o is the output of the IoU-aware branch, and σ is the sigmoid activation function. The localization loss is thus defined as L_loc = L_iou + L_iou-a, and the final loss function L_total applied in FMD-Yolo consists of the coordinate loss L_cod, objectness loss L_obj, classification loss L_cls and localization loss L_loc:

$$
L_{total} = L_{cod} + L_{obj} + L_{cls} + L_{loc} \tag{11}
$$
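A sketch of the localization terms in Eqs. (9)-(10) is given below, assuming the IoU values and the raw IoU-aware branch outputs have already been gathered for the positive predictions; the reduction to a mean is our choice for illustration.

```python
import torch

def localization_loss(iou, iou_pred_logit, w_i=2.5):
    """L_loc = L_iou + L_iou-a from Eqs. (9)-(10). `iou` holds the IoU between
    predictions and ground truth; `iou_pred_logit` is the raw output o of the
    IoU-aware branch."""
    l_iou = (1.0 - iou ** 2) * w_i                                     # Eq. (9)
    p = torch.sigmoid(iou_pred_logit)                                  # sigma(o)
    l_iou_a = -iou * torch.log(p) - (1.0 - iou) * torch.log(1.0 - p)   # Eq. (10)
    return (l_iou + l_iou_a).mean()

iou = torch.rand(8).clamp(1e-4, 1 - 1e-4)   # example IoUs of 8 predicted boxes
loss = localization_loss(iou, torch.randn(8))
```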
Additionally, it is worth noting that in the object detection area, the non-maximum suppression (NMS) technique removes redundant boxes by keeping target detection boxes with high confidence and suppressing the others, to finally obtain the true optimal prediction results. Applying a suitable NMS method not only improves efficiency but also enhances prediction accuracy. Hence, in the inference phase of FMD-Yolo, Matrix NMS [36] is selected as the post-processing method; it uses the sorted upper-triangular IoU matrix to operate in parallel, improving detection accuracy, and effectively prevents overlapping prediction boxes of similar objects from conflicting with each other.
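For reference, the following is a simplified NumPy sketch of Matrix NMS with a Gaussian kernel, adapted from its formulation in SOLOv2 [36] for boxes: scores are decayed in parallel from the upper-triangular IoU matrix instead of being suppressed sequentially, and sigma is a kernel hyperparameter (its value here is illustrative).

```python
import numpy as np

def box_iou(boxes):
    """Pairwise IoU of boxes given as (x1, y1, x2, y2) rows."""
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def matrix_nms(boxes, scores, sigma=0.5):
    """Simplified Matrix NMS (Gaussian kernel): rescore every box in parallel
    from the upper-triangular IoU matrix of the score-sorted boxes."""
    order = scores.argsort()[::-1]                 # sort by descending score
    boxes, scores = boxes[order], scores[order]
    iou = np.triu(box_iou(boxes), k=1)             # IoU with higher-scored boxes
    iou_max = iou.max(axis=0)                      # strongest higher-scored rival
    decay = np.exp(-(iou ** 2 - iou_max[:, None] ** 2) / sigma).min(axis=0)
    return boxes, scores * decay                   # rescored; threshold afterwards
```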
4. Experiments and discussions
In this section, the proposed FMD-Yolo is evaluated on two public face mask detection datasets, and the experimental results are reported and discussed in detail. First, a brief introduction to the experimental preparation is given, including the applied databases, evaluation metrics and parameterization. Then, the FMD-Yolo algorithm is comprehensively analyzed and compared with other state-of-the-art deep learning methods.
4.1. Experiment environment and settings
4.1.1. Applied datasets
In this paper, two face mask datasets have been selected to verify the effectiveness of FMD-Yolo, which are publicly available at https://www.kaggle.com/andrewmvd/face-mask-detection and https://aistudio.baidu.com/aistudio/datasetdetail/24982. For convenience, they are denoted as MD-2 and MD-3, respectively. In particular, MD-2 contains two categories, face (a1) and face_mask (a2), referring to faces and faces with masks, respectively; whereas MD-3 includes three categories, without_mask (b1), with_mask (b2) and mask_weared_incorrect (b3), where the last one indicates the mask is worn incorrectly. Table 1 provides the category distribution of each dataset and the partition used for network training.

Table 1. Datasets information.

Numbers                              MD-2     MD-3
Training set                         6362     683
Validation set                       1590     170
Category a1 / b1 of training set     9937     567
Category a2 / b2 of training set     3212     2388
Category b3 of training set          –        107
Mean boxes per image                 2.067    4.490
4.1.2. Evaluation metrics
As face mask detection is essentially a classification and localization task, typical indicators have been used for evaluation, including true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), based on which Precision and Recall are defined as follows:
$$
\mathrm{Precision} = \frac{TP}{TP + FP} \tag{12}
$$

$$
\mathrm{Recall} = \frac{TP}{TP + FN} \tag{13}
$$
Furthermore, the intersection over union (IoU) is adopted for evaluation; it denotes the ratio of the overlapping area between a prediction box and the corresponding ground truth, and a larger IoU value reflects more precise localization, so IoU = 1 is the best case. Combined with the IoU value, AP50 and AP75 report the average precision at the IoU = 0.5 and IoU = 0.75 levels; similarly, AR50 means the average recall at IoU = 0.5. mAP and mAR report the mean of the 10 precision and recall values obtained as the IoU varies from 0.5 to 0.95 in 0.05 intervals, respectively. For detailed performance on each category, the indicators cAP50 and cAR50 have been used, where c is the abbreviation of category. In addition, P-R curves are plotted for each category, with precision on the horizontal axis and recall on the vertical axis, to further evaluate the comprehensive performance of the face mask detection models.
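For concreteness, Eqs. (12)-(13) and the IoU test that decides whether a prediction counts as a TP at a given threshold can be computed as in this small illustrative sketch.

```python
def precision_recall(tp, fp, fn):
    """Precision and Recall as in Eqs. (12)-(13)."""
    return tp / (tp + fp), tp / (tp + fn)

def iou_xyxy(a, b):
    """IoU of two (x1, y1, x2, y2) boxes; a prediction counts as a TP at,
    e.g., the AP50 level when its best IoU with a ground-truth box >= 0.5."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

p, r = precision_recall(tp=90, fp=10, fn=5)      # -> (0.9, ~0.947)
print(iou_xyxy((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 25 / 175 ~ 0.143
```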
4.1.3. Experimental settings
Table 2 presents the experimental settings of the proposed FMD-Yolo algorithm on both the MD-2 and MD-3 datasets. To further demonstrate the superiority of FMD-Yolo, it has been compared with other state-of-the-art object detection algorithms, including the Faster RCNN baseline [26], Faster RCNN with FPN [19], YoloV3 [29], YoloV4 [3], RetinaNet [20], FCOS [34], EfficientDet [33] and HRNet [31]. For a fair comparison, the training and validation datasets are kept the same and the best convergent model of each algorithm is selected; all experiments are run under the deep learning framework PaddlePaddle 2.1.2 with a Tesla V100 GPU (32 GB). In other words, the eight comparison algorithms and FMD-Yolo share the same hardware configuration and software environment.

Table 2. Experimental settings.

Parameters              MD-2                   MD-3
Max Iterations          120,000                150,000
Base Learning Rate      0.000625
PiecewiseDecay Iters    110,000                130,000
Warmup Steps            4000
Optimizer               SGD with Momentum (factor = 0.9)
Regularizer             L2 (factor = 0.0005)
Train Batch Size        6
4.2. Experimental results and discussions
4.2.1. Overall results and discussions
Table 3 reports the experimental results on the MD-2 dataset, where the proposed FMD-Yolo and eight other advanced detection algorithms are compared on five evaluation metrics: mAP, AP50, AP75, mAR and AR50. As can be seen, the FMD-Yolo algorithm performs best on all indicators, with mAP, AP50, AP75, mAR and AR50 values of 66.4%, 92.0%, 80.0%, 77.4% and 95.5%. Compared with the model that ranks second on each indicator, FMD-Yolo achieves improvements of 4.6%, 1.8%, 5.5%, 10.0% and 2.3% respectively, which demonstrates the superiority of the proposed FMD-Yolo in this two-category detection problem.

Table 3. Performance comparison of FMD-Yolo and eight other detection algorithms on the MD-2 dataset.

Methods                 mAP     AP50    AP75    mAR     AR50
Faster RCNN baseline    0.565   0.869   0.668   0.623   0.895
Faster RCNN with FPN    0.597   0.886   0.713   0.655   0.900
Yolo V3                 0.574   0.888   0.659   0.670   0.929
Yolo V4                 0.599   0.899   0.718   0.661   0.920
RetinaNet               0.616   0.887   0.726   0.664   0.909
FCOS                    0.605   0.894   0.710   0.674   0.932
EfficientDet            0.613   0.870   0.726   0.665   0.910
HRNet                   0.618   0.902   0.745   0.671   0.916
FMD-Yolo (ours)         0.664   0.920   0.800   0.774   0.955
According to Eq. (12) and Eq. (13), precision measures the accuracy of the model's predictions and recall reflects its ability to identify TP instances among all positive samples. Notice that at the larger IoU threshold, the improvement brought by FMD-Yolo on AP75 is 3.7% higher than that on AP50, which indicates that FMD-Yolo meets higher localization accuracy requirements with an even more significant performance advantage. This is mainly because the IoU and IoU-aware losses adopted in training make FMD-Yolo's predictions more precise. As for mAR, which exceeds the second-best by 10%, it can be inferred that the preprocessing phase of the FMD-Yolo model contributes the most, including the various data augmentation methods and the Anchor Cluster algorithm combining k-means and genetic algorithms, which finds the best-matching anchor size settings. In addition, the comparison with the classical YoloV3 and YoloV4 models demonstrates the effectiveness of the designed feature extractor and fusion structure in FMD-Yolo.
Table 4 reports the experimental results on the MD-3 dataset. Notice that compared with MD-2, MD-3 has an extra category, which makes recognition more difficult. Therefore, some algorithms, such as EfficientDet, degrade on MD-3 relative to MD-2, as can be seen from Table 3 and Table 4. However, the proposed FMD-Yolo still performs best, improving mAP, AP50, AP75, mAR and AR50 by 3.9%, 3.8%, 3.9%, 7.1% and 0.2% respectively over the model that ranks second on each indicator. It can be concluded that the FMD-Yolo algorithm is robust enough to present clear advantages in both two-category and three-category face mask detection tasks, which makes it suitable for application in more complex and variable real-world scenarios.

Table 4. Performance comparison of FMD-Yolo and eight other detection algorithms on the MD-3 dataset.

Methods                 mAP     AP50    AP75    mAR     AR50
Faster RCNN baseline    0.521   0.821   0.604   0.601   0.889
Faster RCNN with FPN    0.536   0.846   0.551   0.615   0.907
Yolo V3                 0.507   0.824   0.585   0.609   0.927
Yolo V4                 0.525   0.843   0.591   0.587   0.897
RetinaNet               0.489   0.792   0.536   0.579   0.875
FCOS                    0.521   0.796   0.558   0.628   0.916
EfficientDet            0.454   0.699   0.518   0.569   0.841
HRNet                   0.512   0.802   0.556   0.571   0.845
FMD-Yolo (ours)         0.575   0.884   0.643   0.699   0.929
To further analyze and compare the performance of each algorithm in the face mask detection task intuitively, Fig. 9 shows the overall P-R curves of each algorithm on the MD-2 and MD-3 datasets, where IoU = 0.5 is the threshold dividing positive and negative samples. For any two models, if the P-R curve of one is completely enclosed by that of the other, the latter is deemed better. In addition, the superiority of individual models can be estimated from the area enclosed by the P-R curve and the coordinate axes, or from the break-even point (BEP). The BEP is the point on the P-R curve of each model where precision and recall are equal, i.e., the intersection of the P-R curve with the precision = recall line. As illustrated in Fig. 9, both the area under the P-R curve and the BEP of the FMD-Yolo method are significantly better than those of the other algorithms on either dataset, which further indicates the outstanding performance of FMD-Yolo in face mask detection applications.

Fig. 9. Overall P-R curves for each model on MD-2 and MD-3 datasets (IoU=0.5).
4.2.2. Category-wise results and analysis
In this subsection, more in-depth category-wise discussions are presented.
Table 5 provides the evaluation results on the MD-2 dataset, where the precision cAP50 and average recall cAR50 at IoU = 0.5 of the categories face (a1) and face_mask (a2) are reported. From the results, it is found that all methods perform better at detecting the a2 category. In particular, FMD-Yolo presents a 10.8% higher cAP50 and an 8.7% better cAR50 on a2 than on a1. But even on the a1 class, FMD-Yolo achieves the best results, outperforming the second-best method by 3.2% and 4.1% on cAP50 and cAR50, respectively.

Table 5. Performance comparison on each category of the MD-2 dataset.

Methods                 face (a1)            face_mask (a2)
                        cAP50    cAR50       cAP50    cAR50
Faster RCNN baseline    0.776    0.811       0.962    0.979
Faster RCNN with FPN    0.811    0.824       0.961    0.975
Yolo V3                 0.804    0.863       0.971    0.994
Yolo V4                 0.829    0.855       0.969    0.985
RetinaNet               0.796    0.825       0.978    0.993
FCOS                    0.819    0.870       0.969    0.994
EfficientDet            0.762    0.826       0.979    0.993
HRNet                   0.834    0.852       0.971    0.980
FMD-Yolo (ours)         0.866    0.911       0.974    0.998
Similarly, Table 6 reports the category-wise evaluation results on the MD-3 dataset. Obviously, category b3 is the most difficult one to detect accurately. FMD-Yolo takes first place in precision on every category; on b3 specifically, its cAP50 is 3.3% higher than the second-best, which demonstrates the superiority of FMD-Yolo in complicated detection work. As for cAR50, FMD-Yolo ranks fourth on category b3 and best on the others. The slight loss of cAR50 on b3 may result from the small number of b3 instances in the validation set, and from the fact that the FMD-Yolo model pays more attention to higher-quality detection at larger IoU. Still, FMD-Yolo presents satisfactory performance on this three-category detection problem.

Table 6. Performance comparison on each category of the MD-3 dataset.

Methods                 cAP50 (b1 / b2 / b3)       cAR50 (b1 / b2 / b3)
Faster RCNN baseline    0.817 / 0.917 / 0.730      0.860 / 0.930 / 0.875
Faster RCNN with FPN    0.838 / 0.913 / 0.787      0.860 / 0.922 / 0.938
Yolo V3                 0.864 / 0.928 / 0.681      0.900 / 0.943 / 0.938
Yolo V4                 0.840 / 0.903 / 0.785      0.887 / 0.928 / 0.875
RetinaNet               0.774 / 0.870 / 0.732      0.800 / 0.886 / 0.938
FCOS                    0.842 / 0.917 / 0.627      0.933 / 0.940 / 0.875
EfficientDet            0.696 / 0.870 / 0.530      0.793 / 0.917 / 0.812
HRNet                   0.825 / 0.915 / 0.666      0.860 / 0.925 / 0.750
FMD-Yolo (ours)         0.892 / 0.939 / 0.820      0.947 / 0.964 / 0.875
Furthermore, as shown in Fig. 10, the P-R curves of FMD-Yolo are illustrated for categories a1 and a2 in the MD-2 dataset, and categories b1, b2 and b3 in the MD-3 dataset. Notice that the BEP (break-even point) values and the areas under the curves satisfy a2 > a1 and b2 > b1 > b3, which verifies the reasonableness of the above discussion. It is also worth noting that both categories a2 and b2 belong to the face-wearing-mask type, where FMD-Yolo performs well with cAP50 of 97.4% and 93.9% respectively; this indicates that in real-world applications FMD-Yolo is capable of accurately identifying people who wear a mask.

Fig. 10. P-R curves of FMD-Yolo on each category (IoU=0.5).
5. Conclusion
In this paper, an efficient automatic face mask recognition and detection framework, FMD-Yolo, and its corresponding algorithm are proposed. In particular, a modified Res2Net structure, Im-Res2Net-101, serves as the backbone to extract features with rich receptive fields from the input. It is followed by a feature fusion component, En-PAN, a novel path aggregation network that primarily consists of
Yolo Residual Block, SPP, Coord Conv and SE Layer blocks. In En-PAN, high-level semantic information and low-level details are sufficiently merged so that highly distinctive features with strong robustness can be constructed. In addition, a localization loss function including IoU and IoU-aware losses is applied for training FMD-Yolo, and the parallel computing method Matrix NMS is applied in the post-processing stage, which greatly enhances model inference efficiency. Benchmark evaluations on two publicly available datasets and comparisons with eight other state-of-the-art object detection algorithms have demonstrated the effectiveness and superiority of FMD-Yolo. In future work, we will further enhance the generalization ability of FMD-Yolo to enable applications in various real-world detection scenarios, which can be achieved by designing a more efficient feature fusion model with channel and spatial self-attention mechanisms.
Declaration of Competing Interest
The authors declare that they have no known competing financial
interests or personal relationships that could have appeared to influ-
ence the work reported in this paper.
References
[1] A. Bassi, B.M. Henry, L. Pighi, L. Leone, G. Lippi, Evaluation of indoor hospital acclima-
tization of body temperature before COVID-19 fever screening, J. Hosp. Infect. 112
(2021) 127–128.
[2] V. Bazarevsky, Y. Kartynnik, A. Vakunov, K. Raveendran, M. Grundmann, Blazeface:
Sub-Millisecond Neural Face Detection on Mobile Gpus, 2019 arXiv preprint,
arXiv:1907.05047.
[3] A. Bochkovskiy, C. Wang, H. Liao, YOLOv4: Optimal Speed and Accuracy of Object Detection, 2020 arXiv preprint, arXiv:2004.10934.
[4] C. Chinrungrueng, C. Sequin, Optimal adaptive k-means algorithm with dynamic ad-
justment of learning rate, IJCNN-91-Seattle Int. Jt. Conf. Neural Netw. 1 (1991)
855–862.
[5] S. Gao, M. Cheng, K. Zhao, X. Zhang, M. Yang, P. Torr, Res2Net: a new multi-scale
backbone architecture, IEEE Trans. Pattern Anal. Mach. Intell. 43 (2019) 652–662.
[6] G. Ghiasi, T. Lin, Q. Le, DropBlock: a regularization method for convolutional net-
works, Proceedings of the 32nd International Conference on Neural Information
Processing Systems (NIPS) 2018, pp. 10750–10760.
[7] K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional net-
works for visual recognition, IEEE Trans. Pattern Anal. Mach. Intell. 37 (9) (2015)
1904–1916.
[8] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4, 2016,
pp. 770–778.
[9] G. Huang, Z. Liu, L. van der Maaten, K. Weinberger, Densely connected convolutional
networks, 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) 2017, pp. 2261–2269.
[10] Hendry, R.-C. Chen, Automatic license plate recognition via sliding-window darknet-YOLO deep learning, Image Vis. Comput. 87 (2019) 47–56.
[11] J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE
Trans. Pattern Anal. Mach. Intell. 42 (8) (2020) 2011–2023.
[12] D. Jain, Z. Zhang, K. Huang, Random walk-based feature learning for micro-
expression recognition, Pattern Recogn. Lett. 115 (2018) 92–100.
[13] D. Jain, Z. Zhang, K. Huang, Multi angle optimal pattern-based deep learning for au-
tomatic facial expression recognition, Pattern Recogn. Lett. 139 (2020) 157–165.
[14] A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep
convolutional neural networks, Neural Inf. Process. Syst. (2012) 1097–1105.
[15] U. Mahbub, S. Sarkar, R. Chellappa, Partial face detection in the mobile domain,
Image Vis. Comput. 82 (2019) 1–17.
[16] S. Meivel, K. Devi, S. Maheswari, J. Menaka, Real time data analysis of face mask de-
tection and social distance measurement using matlab, Mater. Today Proc. (2021).
[17] H. Law, J. Deng, CornerNet: detecting objects as paired keypoints, Proceedings of the
European Conference on Computer Vision (ECCV) 2018, pp. 734–750.
[18] M. Liao, H. Liu, X. Wang, X. Hu, Y. Huang, X. Liu, K. Brenan, J. Mecha, M. Nirmalan, J.
Lu, A technical review of face mask wearing in preventing respiratory COVID-19
transmission, Curr. Opin. Colloid Interface Sci. 52 (2021).
[19] T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid net-
works for object detection, 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) 2017, pp. 936–944.
[20] T. Lin, P. Goyal, R. Girshick, K. He, P. Dollar, Focal loss for dense object detection, IEEE
Trans. Pattern Anal. Mach. Intell. 42 (2) (2020) 318–327.
[21] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, A.C. Berg, SSD: single shot multibox detector, Proceedings of the European Conference on Computer Vision (ECCV) 2016, pp. 21–37.
[22] R. Liu, J. Lehman, P. Molino, F. Such, E. Frank, A. Sergeev, J. Yosinski, An intriguing
failing of convolutional neural networks and the coordconv solution, Proceedings
of the 32nd International Conference on Neural Information Processing Systems
(NIPS) 2018, pp. 9628–9639.
[23] S. Liu, L. Qi, H. Qin, J. Shi, J. Jia, Path aggregation network for instance segmentation,
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018,
pp. 8759–8768.
[24] M. Loey, G. Manogaran, M. Taha, N. Khalifa, Fighting against COVID-19: a novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection, Sustain. Cities Soc. 65 (2021).
[25] A. Rahmani, S. Mirmahaleh, Coronavirus disease (COVID-19) prevention and treatment methods and effective parameters: a systematic literature review, Sustain. Cities Soc. 64 (2020).
[26] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection
with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell. 39 (6)
(2017) 1137–1149.
[27] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time
object detection, 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) 2016, pp. 779–788.
[28] J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger, 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR) 2017, pp. 6517–6525.
[29] J. Redmon, A. Farhadi, YOLOv3: An Incremental Improvement, 2018 arXiv:
1804.02767, [online] Available: https://arxiv.org/abs/1804.02767.
[30] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image
recognition, Int. Conf. Learn. Represent. 2015 (2015) 1–14.
[31] K. Sun, B. Xiao, D. Liu, J. Wang, Deep high-resolution representation learning for
human pose estimation, 2019 IEEE Conference on Computer Vision and Pattern Rec-
ognition (CVPR) 2019, pp. 5693–5703.
[32] S. Singh, U. Ahuja, M. Kumar, K. Kumar, M. Sachdeva, Face mask detection using
YOLOv3 and faster R-CNN models: COVID-19 environment, Multimedia Tools
Appl. 80 (2021) 19753–19768.
[33] M. Tan, R. Pang, Q. Le, EfficientDet: scalable and efficient object detection, 2020 IEEE/
CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020,
pp. 10781–10790.
[34] Z. Tian, C. Shen, H. Chen, T. He, FCOS: fully convolutional one-stage object detection,
2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019,
pp. 9626–9635.
[35] X. Wang, R. Girshick, A. Gupta, K. He, Non-local neural networks, 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition 2018, pp. 7794–7803.
[36] X. Wang, R. Zhang, T. Kong, L. Li, C. Shen, SOLOv2: dynamic and fast instance segmentation, Adv. Neural Inf. Process. Syst. 33 (2020) 17721–17732.
[37] D. Wu, S. Lv, M. Jiang, H. Song, Using channel pruning-based YOLO v4 deep learning
algorithm for the real-time and accurate detection of apple flowers in natural envi-
ronments, Comput. Electron. Agric. 178 (2020).
[38] S. Wu, X. Li, X. Wang, IoU-aware single-stage object detector for accurate localiza-
tion, Image Vis. Comput. 97 (2020).
[39] X. Ye, D. Hong, H. Chen, P. Hsiao, L. Fu, A two-stage real-time YOLOv2-based road
marking detector with lightweight spatial transformation-invariant classification,
Image Vis. Comput. 102 (2020).
[40] F. Yu, D. Wang, E. Shelhamer, T. Darrell, Deep layer aggregation, Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition 2018, pp. 2403–2412.
[41] N. Zeng, H. Zhang, B. Song, W. Liu, Y. Li, A.M. Dobaie, Facial expression recognition
via learning deep sparse autoencoders, Neurocomputing 273 (2018) 643–649.
[42] N. Zeng, H. Li, Y. Peng, A new deep belief network-based multi-task learning for diagnosis of Alzheimer's disease, Neural Computing and Applications, 2021, https://doi.org/10.1007/s00521-021-06149-6.
[43] H. Zhang, M. Cisse, Y. Dauphin, D. Lopez-Paz, Mixup: Beyond empirical risk minimi-
zation, Int. Conf. Learn. Represent. (2018).
[44] G. Zheng, Y. Xu, Efficient face detection and tracking in video sequences based on
deep learning, Inf. Sci. 568 (2021) 265–285.
[45] X. Zhu, H. Hu, S. Lin, J. Dai, Deformable convnets V2: more deformable, better results,
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
2019, pp. 9300–9308.
 
Covid Mask Detection and Social Distancing Using Raspberry pi
Covid Mask Detection and Social Distancing Using Raspberry piCovid Mask Detection and Social Distancing Using Raspberry pi
Covid Mask Detection and Social Distancing Using Raspberry pi
 
Covid Detection Using Lung X-ray Images
Covid Detection Using Lung X-ray ImagesCovid Detection Using Lung X-ray Images
Covid Detection Using Lung X-ray Images
 

Último

Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Booking
Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment BookingModels Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Booking
Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Bookingnarwatsonia7
 
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...ggsonu500
 
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Booking
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment BookingRussian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Booking
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Bookingnarwatsonia7
 
2024 HCAT Healthcare Technology Insights
2024 HCAT Healthcare Technology Insights2024 HCAT Healthcare Technology Insights
2024 HCAT Healthcare Technology InsightsHealth Catalyst
 
Pregnancy and Breastfeeding Dental Considerations.pptx
Pregnancy and Breastfeeding Dental Considerations.pptxPregnancy and Breastfeeding Dental Considerations.pptx
Pregnancy and Breastfeeding Dental Considerations.pptxcrosalofton
 
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...ddev2574
 
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...narwatsonia7
 
Call Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Aashi 7001305949 Independent Escort Service BangaloreCall Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalorenarwatsonia7
 
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...satishsharma69855
 
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort Service
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort ServiceCall Girls Hsr Layout Whatsapp 7001305949 Independent Escort Service
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort Servicenarwatsonia7
 
Soft Toric contact lens fitting (NSO).pptx
Soft Toric contact lens fitting (NSO).pptxSoft Toric contact lens fitting (NSO).pptx
Soft Toric contact lens fitting (NSO).pptxJasmin Modi
 
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbers
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbersHi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbers
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbersnarwatsonia7
 
MVP Health Care City of Schenectady Presentation
MVP Health Care City of Schenectady PresentationMVP Health Care City of Schenectady Presentation
MVP Health Care City of Schenectady PresentationMVP Health Care
 
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdf
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdfSARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdf
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdfDolisha Warbi
 
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Service
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts ServiceCall Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Service
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Servicenarwatsonia7
 
College Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbai
College Call Girls Mumbai Alia 9910780858 Independent Escort Service MumbaiCollege Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbai
College Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbaisonalikaur4
 
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 ) unlimited hard...
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 )  unlimited hard...Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 )  unlimited hard...
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 ) unlimited hard...ggsonu500
 
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...sandeepkumar69420
 

Último (20)

Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Booking
Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment BookingModels Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Booking
Models Call Girls Electronic City | 7001305949 At Low Cost Cash Payment Booking
 
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...
Gurgaon Sector 68 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few ...
 
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Booking
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment BookingRussian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Booking
Russian Call Girls Sadashivanagar | 7001305949 At Low Cost Cash Payment Booking
 
2024 HCAT Healthcare Technology Insights
2024 HCAT Healthcare Technology Insights2024 HCAT Healthcare Technology Insights
2024 HCAT Healthcare Technology Insights
 
Pregnancy and Breastfeeding Dental Considerations.pptx
Pregnancy and Breastfeeding Dental Considerations.pptxPregnancy and Breastfeeding Dental Considerations.pptx
Pregnancy and Breastfeeding Dental Considerations.pptx
 
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...
Rohini Sector 6 Call Girls ( 9873940964 ) Book Hot And Sexy Girls In A Few Cl...
 
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...
Hi,Fi Call Girl In Whitefield - [ Cash on Delivery ] Contact 7001305949 Escor...
 
Call Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Aashi 7001305949 Independent Escort Service BangaloreCall Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Aashi 7001305949 Independent Escort Service Bangalore
 
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Delhi Cantt | 9711199171 | High Profile -New Model -Availa...
 
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort Service
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort ServiceCall Girls Hsr Layout Whatsapp 7001305949 Independent Escort Service
Call Girls Hsr Layout Whatsapp 7001305949 Independent Escort Service
 
Soft Toric contact lens fitting (NSO).pptx
Soft Toric contact lens fitting (NSO).pptxSoft Toric contact lens fitting (NSO).pptx
Soft Toric contact lens fitting (NSO).pptx
 
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbers
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbersHi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbers
Hi,Fi Call Girl In Marathahalli - 7001305949 with real photos and phone numbers
 
Russian Call Girls Jor Bagh 9711199171 discount on your booking
Russian Call Girls Jor Bagh 9711199171 discount on your bookingRussian Call Girls Jor Bagh 9711199171 discount on your booking
Russian Call Girls Jor Bagh 9711199171 discount on your booking
 
MVP Health Care City of Schenectady Presentation
MVP Health Care City of Schenectady PresentationMVP Health Care City of Schenectady Presentation
MVP Health Care City of Schenectady Presentation
 
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdf
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdfSARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdf
SARS (SEVERE ACUTE RESPIRATORY SYNDROME).pdf
 
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Service
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts ServiceCall Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Service
Call Girl Service ITPL - [ Cash on Delivery ] Contact 7001305949 Escorts Service
 
College Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbai
College Call Girls Mumbai Alia 9910780858 Independent Escort Service MumbaiCollege Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbai
College Call Girls Mumbai Alia 9910780858 Independent Escort Service Mumbai
 
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 ) unlimited hard...
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 )  unlimited hard...Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 )  unlimited hard...
Gurgaon Sushant Lok Phase 2 Call Girls Service ( 9873940964 ) unlimited hard...
 
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...
Russian Call Girls Mohan Nagar | 9711199171 | High Profile -New Model -Availa...
 
Russian Call Girls South Delhi 9711199171 discount on your booking
Russian Call Girls South Delhi 9711199171 discount on your bookingRussian Call Girls South Delhi 9711199171 discount on your booking
Russian Call Girls South Delhi 9711199171 discount on your booking
 

7.Mask detection.pdf

methods for reducing transmission of COVID-19.
In the past few years, with the rapid development of deep learning, computer vision (CV) has been widely used in facial expression recognition [12,13,41], disease diagnosis [42] and so on. At the same time, CV techniques have boomed and many classical object detection methods have emerged, including one-stage, two-stage and anchor-free algorithms like Faster RCNN [26], RetinaNet [20], YOLOv3 [29], YOLOv4 [3], CornerNet [17], FCOS [34], etc. Existing algorithms for face and mask detection can be divided into two classes: real-time detection methods that pursue fast inference speed [2,15,24], and high-performance detection methods that prioritize accuracy. So far, much in-depth research has been carried out on face mask detection [24,16,32]. In [2], inspired by MobileNet and SSD, a lightweight model BlazeFace is proposed, which can reach sub-millisecond face detection speed and is suitable for deployment on mobile devices.
[15] proposes a deep regression-based user image detector (DRUID), which makes segmentation-based deep convolution algorithms available in the mobile domain and also has great advantages in saving time. A face detection model that integrates the Squeeze-Excitation Network and ResNet is designed in [44], where a regression network is applied to track and localize faces in video sequences. As for mask detection, [32] applies YOLOv3 and Faster RCNN to detect whether a mask is worn or not, and [24] proposes a transfer learning method based on ResNet50 and YOLOv2 for the detection of medical masks. In addition, RCNN-series algorithms are adopted to address a multi-task problem in [16], including social distance calculation and mask detection.

According to the above discussions, it is found that few current methods highlight the unique characteristics of face mask detection; they generally handle this problem as a conventional object detection task. However, in practice not only is detection of both mask and face necessary, but localization of the mask on the face is also of vital significance, since it allows estimating whether the mask is worn in the right or wrong way. In order to meet this special demand, in this paper a mask-wearing-oriented detection scheme is proposed along with the corresponding algorithm, as shown in Fig. 1. In particular, three scenarios have been defined: not wearing a mask, and wearing a mask in the right/wrong way. Before splitting the data, preprocessing operations are performed to eliminate dirty data and adjust image size. Afterwards, anchor clustering is performed to select a suitable anchor scale for the training data, which benefits model training by introducing task-specific prior knowledge; then other data augmentation operations are applied to enrich the samples. By means of sufficient training epochs, the best model and corresponding weight parameters are saved and evaluated. Finally, once the precision reaches a predefined criterion, the model is deployed on the terminal device for practical application. It is remarkable that when video streams or pictures are fed into terminal devices, in-time detection results can be output and visualized through the designed algorithm. Consequently, intelligent monitoring of mask wearing can be accomplished by deploying relevant devices in places with a high population flow-rate such as airports and supermarkets, which can effectively improve the efficiency of COVID-19 epidemic prevention and control.

(Fig. 1. General flowchart of the face mask detection algorithm (FMDA) framework.)

The major contributions of this paper are summarized as follows:

1) A general face mask detection framework is proposed, which is mask-wearing oriented and can be easily extended for secondary development and improvement.
2) A novel face mask detection algorithm FMD-Yolo is proposed, where the feature extractor, feature fusion operation and post-processing techniques are all specifically designed.
3) Experimental results on two public datasets fully demonstrate the superiority of FMD-Yolo, which outperforms several state-of-the-art methods. Moreover, its high generalization ability enables application in other situations.

The remainder of this paper is organized as follows.
Preliminaries of related work are provided in Section 2. In Section 3, the detailed implementation of FMD-Yolo is elaborated, including data preprocessing, analytic methods, the overall structure and post-processing techniques. Benchmark evaluation results and parallel comparisons with comprehensive discussions are presented in Section 4, and conclusions are drawn in Section 5.

2. Related work

For a comprehensive and in-depth understanding of the proposed face mask detection algorithm, some background information is presented in this section, including the feature extractor, feature fusion methods and the YoloV3 framework.

2.1. Feature extractor based on residual structure

In the field of computer vision (CV), one important research topic is how to effectively extract features from multi-scale information. With the rapid development of deep learning techniques, feature extraction has gradually transformed from manual design to automatic construction by algorithms [14]. In object detection tasks, the corresponding algorithm is known as the backbone, and up to now plenty of networks have emerged, such as VGG [30], ResNet [8], DenseNet [9] and DLA (Deep Layer Aggregation) [40]. One practical strategy applied in those methods is to enrich connections among feature maps, like stacking layers and setting skip connections.

It should be pointed out that ResNet is one of the most widely applied backbones, which designs a residual block to overcome the degradation problem associated with training a deep network. Res2Net [5] constructs a hierarchical progressive residual connection, which is conducive to improving the multi-scale feature expression ability of the network at a finer level. The major innovation of Res2Net is to replace the 3 × 3 convolution in the bottleneck of ResNet with hierarchical cascaded feature group convolution, and this modification is shown in Fig. 2. The design of hierarchical convolution facilitates enlarging the receptive field within the bottleneck block, where more refined multi-branches can achieve better capture of the objects.

(Fig. 2. Comparison of the conventional bottleneck structure with the Res2Net module.)
To be specific, Res2Net integrates s groups of convolution kernels and has s × w channels in total, where w represents the width of the network. A larger value of s allows more abundant receptive fields for learning multi-scale features. In this paper, s and w are set to 4 and 26, respectively. As shown in Fig. 2, the bottleneck in Res2Net is divided into s subsets after the 1 × 1 convolution, which are defined as X_i, i ∈ {1, 2, ..., s}. Each subset has the same spatial size as the input, but the number of channels becomes 1/s of the original. Except for X_1, each sub-feature is processed by a 3 × 3 convolution operation C_i(·), i ∈ {2, 3, ..., s}, and lateral connections are adopted to enhance interaction. The hierarchical residual connection enables changes of receptive fields at a finer-grained level to capture both details and global characteristics. Notice that the 3 × 3 convolution in each group can receive information from the left branch, thus the final output contains multi-scope features, which is denoted as Y_i, i ∈ {1, 2, ..., s} and can be obtained as:

Y_i =
  \begin{cases}
    X_i, & i = 1 \\
    C_i(X_i), & i = 2 \\
    C_i(X_i + Y_{i-1}), & 2 < i \le s
  \end{cases}    (1)

2.2. Feature pyramid network

In early object detection algorithms, the output of the backbone network is directly fed into the subsequent component, the so-called detection head. However, these output feature maps usually cannot make full use of information to characterize objects at multiple scales, and their resolution is too small compared with the original input. To overcome such limitations and enhance detection accuracy and reliability, feature fusion methods have become another research hot-spot, as they can merge multi-scale features at different levels. In [21], the authors propose the single shot detector (SSD), which directly uses feature maps of different stages to achieve multi-scale detection, but there is no interconnection among individual feature maps. The feature pyramid network (FPN) [19] applies lateral concatenation and an up-sampling operation to effectively merge high-level semantic features in top layers with high-resolution information in bottom layers. It is remarkable that many other feature fusion methods are designed based on FPN, like the path aggregation network (PANet) [23], which adds an extra bottom-up path. This simple bidirectional fusion is proven helpful, as information in bottom layers can be delivered upwards; afterwards, more complicated bidirectional fusion methods have emerged. For example, the bi-directional feature pyramid network (BiFPN) [33] adds cross-scale connections to the PANet. By means of multiple repetitions, the information of each stage can be sufficiently integrated, and the fused feature maps can even be reused. To sum up, feature map fusion approaches have developed from simple manners like top-down and bidirectional fusion to more complex ones, as shown in Fig. 3, where C_i (i ∈ N) denotes the feature maps generated at each stage of the backbone network and P_i (i ∈ N) denotes the output obtained after multi-scale feature fusion. Defining the fusion operation as fusion(·), the relationship between output and input can be depicted as:

(P_i, P_{i+1}, \ldots, P_{i+k}) = \mathrm{fusion}(C_i, C_{i+1}, \ldots, C_{i+k}), \quad i, k \in \mathbb{N}    (2)

(Fig. 3. Four types of feature fusion methods.)

Minimal sketches of the Res2Net grouping of Eq. (1) and of the simplest top-down fusion of Eq. (2) follow.
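To make the hierarchical grouping of Eq. (1) concrete, the following is a minimal PyTorch-style sketch of a Res2Net-style bottleneck with s = 4. This is our illustration only; the paper's Im-Res2Net-101 is built in PaddlePaddle and additionally uses deformable convolution and non-local blocks on top of this idea.

```python
import torch
import torch.nn as nn

class Res2NetBottleneck(nn.Module):
    """Sketch of the hierarchical residual grouping of Eq. (1), with s = 4."""
    def __init__(self, channels: int, s: int = 4):
        super().__init__()
        assert channels % s == 0
        self.s = s
        self.width = channels // s
        # one 3x3 convolution C_i per subset, except the first (kept as identity)
        self.convs = nn.ModuleList(
            nn.Conv2d(self.width, self.width, 3, padding=1) for _ in range(s - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xs = torch.split(x, self.width, dim=1)           # X_1 ... X_s
        ys = [xs[0]]                                     # Y_1 = X_1
        for i in range(1, self.s):
            inp = xs[i] if i == 1 else xs[i] + ys[-1]    # lateral connection
            ys.append(self.convs[i - 1](inp))            # Y_i = C_i(X_i + Y_{i-1})
        return torch.cat(ys, dim=1)                      # multi-scope output

feats = torch.randn(1, 104, 40, 40)                      # 104 = s * w with w = 26
print(Res2NetBottleneck(104)(feats).shape)               # torch.Size([1, 104, 40, 40])
```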
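Similarly, the top-down pathway underlying Eq. (2) can be sketched as lateral 1 × 1 projections plus upsample-and-add, in the style of FPN [19]. Channel counts and scales below are made-up for shape illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFPN(nn.Module):
    """Eq. (2) in its simplest top-down form: P_i = lat(C_i) + Up(P_{i+1})."""
    def __init__(self, in_chs=(256, 512, 1024), out_ch=256):
        super().__init__()
        # 1x1 lateral projections that map each C_i to a common channel count
        self.laterals = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_chs)

    def forward(self, c3, c4, c5):
        p5 = self.laterals[2](c5)
        p4 = self.laterals[1](c4) + F.interpolate(p5, scale_factor=2)  # nearest
        p3 = self.laterals[0](c3) + F.interpolate(p4, scale_factor=2)
        return p3, p4, p5

c3, c4, c5 = (torch.randn(1, c, s, s) for c, s in ((256, 80), (512, 40), (1024, 20)))
print([p.shape[-1] for p in TopDownFPN()(c3, c4, c5)])   # [80, 40, 20]
```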
2.3. You only look once (YOLO)

The YOLO (You only look once) series of algorithms has already gone through four versions [27–29,3], which are deemed magnificent masterpieces in the field of object detection and have been widely applied in various scenarios, including road marking detection [39], license plate recognition [10], agricultural production [37], etc. In particular, YoloV3 [29] is recognized as an important breakthrough, which makes a trade-off between detection precision and speed. The structure of YoloV3 is illustrated in Fig. 4, where the backbone network adopts DarkNet53 with the fully connected layer removed, and the primary components include convolution blocks and stacked residual units (Res Unit), denoted as CBR (Conv BN Relu) and ResN, respectively. Notice that each CBR block contains convolution, batch normalization and leaky ReLU activation; N in ResN is the number of stacked residual units.

(Fig. 4. The architecture of the YoloV3 network.)

As shown in Fig. 4, another unique characteristic of YoloV3 is the implementation of multi-scale prediction, i.e., detection at 32×, 16× and 8× down-sampling of the input images, which generates feature maps of size 13 × 13, 26 × 26 and 52 × 52 (for a 416 × 416 input) for the detection of large, medium and small objects, respectively. It is also remarkable that the feature fusion manner in YoloV3 is similar to the idea of FPN, which takes full advantage of high-level semantic features and low-level geometric information so that the overall framework has strong feature presentation ability and, consequently, more accurate detection can be achieved.
3. Detailed implementations of proposed FMD-Yolo

In this section, the overall framework of the proposed FMD-Yolo method is elaborated, including data preprocessing, model implementation and the post-processing method.

3.1. Data pre-processing and analysis methods

Fig. 1 presents the procedure of the applied data preprocessing, which contains sample and batch transformations. At first, the Mixup [43] method is applied along with other traditional data augmentation operations like random color distortion, expansion and flip, etc. Traditional methods enhance the robustness of model training by introducing noises and perturbations into a single image, whereas Mixup helps to enhance the generalization ability of the model and improve the stability of the training process by linearly interpolating two random samples and their corresponding labels in the training set to generate new virtual samples, which can be depicted as:

\tilde{x} = \lambda x_i + (1 - \lambda) x_j, \qquad \tilde{y} = \lambda y_i + (1 - \lambda) y_j    (3)

where x_i and x_j represent two different inputs, y_i and y_j are the corresponding one-hot labels, and λ is the mixing coefficient drawn from a beta distribution λ ∼ Beta(α, α), α ∈ (0, ∞) (both shape parameters are set to 1.5 in this paper).

Secondly, input images are resized to 640 × 640 and then normalized by a Normalize operation. To further match the training data with the FMD-Yolo model, k-means clustering and the mutation operation of a genetic algorithm are used to select the best combination of prior anchor sizes, i.e., the "Anchor Cluster" in Fig. 1 [4]: k-means provides initial anchor sizes, after which the genetic mutation operation further optimizes and determines them. It is noteworthy that an appropriate anchor size setting benefits model convergence and allows useful prior knowledge to be learned, which can speed up the model training process and yield results that better match the actual values. The detailed implementation flowchart of the anchor cluster is shown in Fig. 5.

(Fig. 5. The anchor cluster algorithm flow used in this paper.)

As shown in Fig. 5, firstly K initial clustering centers are selected, where K is set to 9 in this paper. Then each sample point is assigned to its nearest cluster, as measured by the IoU (Intersection over Union)-based distance between the clustering center and the sample point, as provided in Eq. (4). Once the cluster centers are fixed and no longer change, k-means clustering is accomplished and the genetic algorithm is applied to evolve the anchor settings. To be specific, after parameter initialization (fitness function f, mutation probability mp, variation coefficient σ and number of generations gen), a mutation operation is carried out to generate a new generation, which is then multiplied with the previously obtained clustering centers to calculate f. The original clustering center is updated only when a larger fitness value is obtained, and after 1000 iterations the update terminates with the final anchor size setting saved.

D(X_i) = 1 - \mathrm{IoU}(X_i, C_i)    (4)

Short sketches of the Mixup operation of Eq. (3) and of the IoU-based clustering distance of Eq. (4) are given below.
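As an illustration of Eq. (3), here is a minimal NumPy-style Mixup sketch. It is our example and covers only λ sampling and blending of a pair of samples; the box-label handling a detector additionally needs is omitted.

```python
import numpy as np

def mixup(img_a, img_b, labels_a, labels_b, alpha=1.5):
    """Eq. (3): blend two samples with lambda ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    mixed = lam * img_a + (1.0 - lam) * img_b                 # \tilde{x}
    mixed_labels = lam * labels_a + (1.0 - lam) * labels_b    # \tilde{y} (one-hot)
    return mixed.astype(img_a.dtype), mixed_labels, lam

a, b = np.random.rand(640, 640, 3), np.random.rand(640, 640, 3)
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, b, ya, yb)
print(round(lam, 3), y)   # e.g. 0.42 [0.42 0.58]
```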
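And the IoU-based distance of Eq. (4) used during anchor clustering can be sketched as follows, using width/height-only IoU between boxes and cluster centers as is standard for anchor k-means. This is a plain k-means sketch under our assumptions; the genetic mutation step of Fig. 5 is omitted.

```python
import numpy as np

def iou_distance(boxes, centers):
    """Eq. (4): D(X_i) = 1 - IoU(X_i, C_i) for (w, h) pairs, common corner."""
    inter_w = np.minimum(boxes[:, None, 0], centers[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], centers[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centers[:, 0] * centers[:, 1])[None, :] - inter
    return 1.0 - inter / union                        # shape (num_boxes, K)

def kmeans_anchors(boxes, k=9, iters=100):
    """Plain k-means under the IoU distance; K = 9 as in the paper."""
    centers = boxes[np.random.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = iou_distance(boxes, centers).argmin(axis=1)
        centers = np.array([
            boxes[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
    return centers

boxes = np.abs(np.random.randn(500, 2)) * 80 + 20     # synthetic (w, h) pairs
print(np.round(kmeans_anchors(boxes), 1))             # 9 clustered anchor sizes
```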
3.2. Details of the FMD-Yolo network structure

Fig. 6 illustrates the framework and implementation details of the proposed face mask detection model FMD-Yolo. There are three major components in the designed deep network: Im-Res2Net-101, En-PAN and Classification + Regression. In particular, Im-Res2Net-101 (101 denotes the number of layers) is a modified Res2Net structure serving as the backbone of FMD-Yolo, which is responsible for extracting features from the inputs and has three-stage outputs denoted as C3, C4 and C5. En-PAN is a bidirectional fusion approach based on PANet that enhances the detection performance of the FMD-Yolo network by further refining and fusing the information output by the backbone. Meanwhile, the loss function of FMD-Yolo is a modification of that of YoloV3, which will be introduced in the next subsection. For a more comprehensive understanding, Fig. 7 and Fig. 8 present the designed Im-Res2Net-101 and En-PAN architectures, respectively.

(Fig. 6. Overall structure of the proposed FMD-Yolo.)
(Fig. 7. The structure of the Im-Res2Net-101 backbone.)

Im-Res2Net-101 contains a total of five stages, and FMD-Yolo utilizes the output feature maps of the last three stages and passes them into the subsequent En-PAN feature fusion module.
In stage 1, by using three 3 × 3 convolutions, the learning ability is guaranteed with only a small number of parameters. Then in stage 2, in order to retain as much valid information in the feature map as possible, a 3 × 3 max pooling operation with a stride of 2 is performed, after which the output passes through the Im-Res2Net-a module 3 times. From stage 3 to stage 5, the Im-Res2Net-b module is used and repeated 4, 23 and 3 times, respectively. The Non-local operation is introduced in stages 3 and 4, and deformable convolution [45] is applied in stages 4 and 5. Since the size and shape of human faces and masks in real scenes are changeable, a fixed receptive field will lead to performance degradation. Deformable convolution, by contrast, allows modeling of geometric transformations by adding two-dimensional offset values to the conventional convolution operation, which enables the sampling region to deform freely. The output feature maps of regular and deformable convolution at each position c_0 are denoted as RC and DC respectively, as described in Eq. (5) and Eq. (6):

RC(c_0) = \sum_{c_n \in S} w(c_n) \cdot x(c_0 + c_n)    (5)

DC(c_0) = \sum_{c_n \in S} w(c_n) \cdot x(c_0 + c_n + \Delta c_n)    (6)

where S refers to the sampling grid area, c_n represents each position traversed on S, and w(·) is the weight function; \Delta c_n, n ∈ {1, 2, ..., |S|} is the offset that displaces each sampling point on S. In addition, to reduce the loss of information after convolution, the right branch of Im-Res2Net-b applies a 2 × 2 average pooling with stride 2 followed by a 1 × 1 convolution with stride 1, rather than a single 1 × 1 convolution with stride 2. The Im-Res2Net-b module is followed by a Spacetime Non-local block, which is essentially based on a self-attention mechanism that keeps the processed information from being limited to a local region [35], and thus captures remote dependencies directly by computing the interaction between any two positions. "Spacetime" refers to space-based and time-based information; that is, the attention mechanism can also play its role on video sequences. The Non-local operation directly fuses global information instead of simply stacking multiple convolutional layers and can generate richer semantic information, which is equivalent to constructing a convolution kernel with the same size as the corresponding input feature map. The following equation describes the working principle of the Non-local operation:

y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j)    (7)

where x and y refer to the input and output feature maps, respectively; i and j represent the current and a global position, both indices over space and time. f(·) computes the similarity between positions i and j, for which the Embedded Gaussian function is used; g(·) computes a linear transformation of the feature map at position j, which can be realized by a 1 × 1 convolution; C(·) is a normalization factor, and the final output y is obtained after standardization. Sketches of the deformable convolution and Non-local operations are given below.
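For Eqs. (5)–(6), the offset-driven sampling can be sketched with torchvision's DeformConv2d, where a small regular convolution predicts the per-position offsets Δc_n. This is our PyTorch illustration; the paper's implementation is in PaddlePaddle.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Eq. (6): sample at c_0 + c_n + delta_c_n with learned offsets."""
    def __init__(self, ch: int, k: int = 3):
        super().__init__()
        # 2 offset values (dx, dy) per kernel position, per output pixel
        self.offset = nn.Conv2d(ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(ch, ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))   # zero offsets reduce to Eq. (5)

x = torch.randn(1, 64, 40, 40)
print(DeformBlock(64)(x).shape)                  # torch.Size([1, 64, 40, 40])
```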
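And Eq. (7) with the Embedded Gaussian similarity reduces to softmax attention over all positions, as in this minimal 2-D sketch. Again, this is only our illustration; the residual connection and normalization details of [35] are omitted.

```python
import torch
import torch.nn as nn

class NonLocal2d(nn.Module):
    """Eq. (7): y_i = (1/C(x)) * sum_j f(x_i, x_j) g(x_j), embedded Gaussian."""
    def __init__(self, ch: int):
        super().__init__()
        inner = ch // 2
        self.theta = nn.Conv2d(ch, inner, 1)   # embeds x_i
        self.phi = nn.Conv2d(ch, inner, 1)     # embeds x_j
        self.g = nn.Conv2d(ch, inner, 1)       # linear transform g(x_j)
        self.out = nn.Conv2d(inner, ch, 1)     # restore channel count

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)    # (n, hw, c')
        k = self.phi(x).flatten(2)                      # (n, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)        # (n, hw, c')
        attn = torch.softmax(q @ k, dim=-1)             # f / C(x) via softmax
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return self.out(y)

print(NonLocal2d(64)(torch.randn(1, 64, 20, 20)).shape)  # torch.Size([1, 64, 20, 20])
```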
(Fig. 8. The implementation of the En-PAN structure.)
Fig. 8 illustrates the structure and implementation of En-PAN, which adopts the bi-directional fusion of PANet and takes full advantage of multi-scale information, based on the Yolo Residual module, to enhance feature presentation ability. Besides the top-down path, a bottom-up path is added and connected horizontally to fully integrate both the overall target information of the high-level feature maps and the low-level texture information. The top-down and bottom-up paths pass through a total of five Yolo Residual modules to thoroughly improve robustness. To make the feature maps match in size, an Upsample block and a 3 × 3 convolution with stride 2 are employed in the top-down and bottom-up paths, respectively. Based on the aforementioned three-stage outputs C3, C4 and C5 of Im-Res2Net-101, the output of En-PAN can be obtained as:

P_3 = YR(F_3), \qquad F_3 = C_3 \oplus Up(F_4);
P_4 = YR(Co(P_3) \oplus F_4), \qquad F_4 = YR(C_4 \oplus Up(F_5));    (8)
P_5 = YR(Co(P_4) \oplus F_5), \qquad F_5 = YR(C_5)

where YR and Up denote the operations associated with the Yolo Residual block and the UpSample block respectively, and Co represents a 3 × 3 convolution with stride = 2.

The Yolo Residual Block also has two branches, where the left side includes one 1 × 1 convolution and three Conv Blocks, while the right side has one 1 × 1 convolution and an SE (Squeeze and Excitation) Layer [11]. Eventually the outputs of both branches are concatenated and, after a 1 × 1 convolution, the final output is obtained. It should be pointed out that the applied Conv Block introduces the Coordinate Convolution (Coord Conv) [22], Spatial Pyramid Pooling (SPP) [7] and DropBlock [6] optimization strategies. In particular, Coord Conv adds two channels to the input feature maps to represent the horizontal and vertical coordinates, which makes the convolution spatially aware. Since the face mask detection task in this paper is to output bounding boxes by observing image pixels in a Cartesian coordinate space, the application of Coord Conv can overcome the defects of ordinary convolution under spatial transformation and thereby improve the space perception ability. DropBlock is a regularization method that can be deemed a variant of Dropout; it drops not only a single unit in the feature map but also its neighboring regions, to prevent over-fitting and guarantee the efficiency of regularization, so it is more suitable for the face mask detection task of FMD-Yolo. SPP further aggregates the features obtained from the previous convolutional layers and can adapt to changes in target size and aspect ratio, which is a proven multi-resolution strategy and makes the model more robust against deformation of the target object. In this paper, the SPP module is constructed by using three parallel max-pooling layers with kernel sizes of 5, 9 and 13 respectively; the input and the outputs of the three layers are then concatenated. Moreover, the SE Layer of the Yolo Residual Block enables adaptive calibration of feature channels by explicitly modeling the interdependence therein through squeeze and excitation operations, allowing the network to selectively enhance beneficial feature channels and suppress useless ones by using global information, which is realized by one average pooling and two fully-connected layers. Sketches of the SPP and SE components follow.
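The SPP variant described above (three parallel max-poolings of kernel sizes 5, 9 and 13, concatenated with the input) is easy to sketch. This is our illustration under those stated sizes:

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Parallel max-pools of kernel size 5/9/13, concatenated with the input."""
    def __init__(self):
        super().__init__()
        # stride 1 + symmetric padding keeps the spatial size unchanged
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in (5, 9, 13)
        )

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)  # 4x channels

x = torch.randn(1, 128, 20, 20)
print(SPP()(x).shape)   # torch.Size([1, 512, 20, 20])
```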
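And the SE Layer (global average pooling followed by two fully-connected layers and channel-wise scaling [11]) can be sketched as below; the reduction ratio of 16 is the common default, not a value stated in the paper.

```python
import torch
import torch.nn as nn

class SELayer(nn.Module):
    """Squeeze (global avg pool) and excitation (two FC layers) [11]."""
    def __init__(self, ch: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        scale = self.fc(x.mean(dim=(2, 3)))   # squeeze -> per-channel weight
        return x * scale.view(n, c, 1, 1)     # recalibrate channels

x = torch.randn(1, 128, 20, 20)
print(SELayer(128)(x).shape)   # torch.Size([1, 128, 20, 20])
```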
3.3. Loss function and post-processing methods

On the basis of the cross-entropy loss function of the original YoloV3 network, the proposed FMD-Yolo algorithm further introduces a localization loss, consisting of an IoU loss and an IoU-aware loss. IoU measures the difference between a predicted bounding box and the corresponding ground-truth, which reflects the localization accuracy and helps maintain model performance in complex environments such as overlapping multiple targets. In this paper it is calculated as:

L_{iou} = (1 - iou^2)\, w_i    (9)

where w_i is an introduced weight that takes the value 2.5. The IoU-aware loss further adopts an additional IoU-aware prediction branch to better measure the localization accuracy [38], which can be obtained by:

L_{iou\text{-}a} = -iou \cdot \log(\sigma(o)) - (1 - iou) \cdot \log(1 - \sigma(o))    (10)

where iou denotes the intersection over union between the predicted boxes and the ground-truth, o is the output of the IoU-aware branch, and σ is the sigmoid activation function. The localization loss is thus defined as L_{loc} = L_{iou} + L_{iou-a}, and the final loss function L_{total} applied in FMD-Yolo consists of the coordinates loss L_{cod}, objectness loss L_{obj}, classification loss L_{cls} and localization loss L_{loc}:

L_{total} = L_{cod} + L_{obj} + L_{cls} + L_{loc}    (11)

Additionally, it is remarkable that in the object detection area, the non-maximum suppression (NMS) technique is able to remove redundant boxes by keeping target detection boxes with high confidence and suppressing the others, finally obtaining the true optimal prediction results. Applying a suitable NMS method not only facilitates improving the efficiency, but also enhances the prediction accuracy. Hence, in the inference phase of FMD-Yolo, Matrix NMS [36] is selected as the post-processing method, which uses a sorted upper-triangular IoU matrix for parallelized computation to improve detection accuracy, and can effectively prevent overlapping prediction boxes of similar objects from conflicting with each other. A sketch of the localization loss is given below.
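Under our reading of Eqs. (9)–(10) above, the localization loss can be sketched as follows. This is an interpretation, not the paper's released code: w_i = 2.5 is taken as a multiplicative weight, and the per-prediction iou values are assumed to be precomputed.

```python
import torch
import torch.nn.functional as F

def localization_loss(iou, iou_branch_logits, w_i=2.5):
    """L_loc = L_iou + L_iou-a, Eqs. (9)-(11) (our reading of the paper)."""
    l_iou = (1.0 - iou ** 2) * w_i                          # Eq. (9)
    # Eq. (10) is binary cross-entropy with the true IoU as a soft target
    l_iou_aware = F.binary_cross_entropy_with_logits(
        iou_branch_logits, iou, reduction="none"
    )
    return (l_iou + l_iou_aware).mean()

iou = torch.rand(8)                    # IoU of 8 predicted boxes vs. ground-truth
logits = torch.randn(8)                # raw outputs o of the IoU-aware branch
print(localization_loss(iou, logits))  # scalar loss tensor
```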
4. Experiments and discussions

In this section, the proposed FMD-Yolo is evaluated on two public human face mask detection datasets, and the experimental results are reported and discussed in detail. At first, a brief introduction to our experimental preparation is given, including information on the applied databases, evaluation metrics and parameterization. Then, the FMD-Yolo algorithm is comprehensively analyzed and compared with other state-of-the-art deep learning methods.

4.1. Experiment environment and settings

4.1.1. Applied datasets

In this paper, two face mask datasets have been selected to verify the effectiveness of FMD-Yolo, which are publicly available at https://www.kaggle.com/andrewmvd/face-mask-detection and https://aistudio.baidu.com/aistudio/datasetdetail/24982. For convenience, they are denoted as MD-2 and MD-3, respectively. In particular, MD-2 contains two categories, face (a1) and face_mask (a2), referring to faces and faces with masks, respectively; whereas MD-3 includes three categories, without_mask (b1), with_mask (b2) and mask_weared_incorrect (b3), where the last one indicates the mask is worn incorrectly. Table 1 provides the category distribution of each database and the partition used for network training.

Table 1
Datasets information.
                                   MD-2    MD-3
Training set                       6362    683
Validation set                     1590    170
Category a1 / b1 of training set   9937    567
Category a2 / b2 of training set   3212    2388
Category b3 of training set        –       107
Mean boxes per image               2.067   4.490

4.1.2. Evaluation metrics

As face mask detection is essentially a classification and localization task, typical indicators have been used for evaluation, including true positive (TP), true negative (TN), false positive (FP) and false negative (FN), based on which Precision and Recall are defined as follows:

\mathrm{Precision} = \frac{TP}{TP + FP}    (12)

\mathrm{Recall} = \frac{TP}{TP + FN}    (13)

Furthermore, intersection over union (IoU) is adopted for evaluation, which denotes the ratio of the overlapping area between prediction boxes and the corresponding ground-truth; a larger IoU value reflects more precise localization, so IoU = 1 is the best case. Combined with the IoU value, AP50 and AP75 report the average precision at the IoU = 0.5 and IoU = 0.75 levels; similarly, AR50 denotes the average recall at IoU = 0.5. mAP and mAR report the averages of 10 precision and recall values as IoU varies from 0.5 to 0.95 with an interval of 0.05, respectively. As for detailed performance on each category, the indicators cAP50 and cAR50 are used, where c is the abbreviation of category. In addition, P-R curves are plotted for each category, with precision on the horizontal coordinate and recall on the vertical coordinate, to further evaluate the comprehensive performance of the face mask detection models. A toy sketch of these metrics follows.
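As a concrete reading of Eqs. (12)–(13) and the IoU-thresholded matching they rely on, here is a toy sketch of ours. Real COCO-style evaluation additionally handles score ranking, multiple IoU thresholds and per-category averaging; the greedy matching below is deliberately simplistic.

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """Eqs. (12)-(13): a prediction is a TP if it matches a ground-truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda j: box_iou(p, gts[j]), default=None)
        if best is not None and best not in matched and box_iou(p, gts[best]) >= iou_thr:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    return tp / (tp + fp), tp / (tp + fn)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(precision_recall(preds, gts))  # (0.5, 0.5)
```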
4.1.3. Experimental settings

Table 2 presents the experimental settings of the proposed FMD-Yolo algorithm on both the MD-2 and MD-3 databases. In order to further demonstrate the superiority of FMD-Yolo, it is compared with other state-of-the-art object detection algorithms, including the Faster RCNN baseline [26], Faster RCNN with FPN [19], YoloV3 [29], YoloV4 [3], RetinaNet [20], FCOS [34], EfficientDet [33] and HRNet [31]. For a fair comparison, the training and validation datasets are kept the same and the best convergent model of each algorithm is selected; all experiments are run under the deep learning framework PaddlePaddle 2.1.2 with a Tesla V100 GPU (32 GB). In other words, the eight comparison algorithms and FMD-Yolo share the same hardware configuration and software environment.

Table 2
Experimental settings.
Parameters            MD-2                MD-3
Max Iterations        120,000             150,000
Base Learning Rate    0.000625
PiecewiseDecay Iters  110,000             130,000
Warmup Steps          4000
Optimizer             SGD with Momentum (factor = 0.9)
Regularizer           L2 (factor = 0.0005)
Train Batch Size      6

4.2. Experimental results and discussions

4.2.1. Overall results and discussions

Table 3 reports the experimental results on the MD-2 database, where the proposed FMD-Yolo and eight other advanced detection algorithms are compared on five evaluation metrics: mAP, AP50, AP75, mAR and AR50. As can be seen, the FMD-Yolo algorithm performs best on all indicators, with mAP, AP50, AP75, mAR and AR50 values of 66.4%, 92.0%, 80.0%, 77.4% and 95.5%. Compared with the model that ranks second on each indicator, FMD-Yolo makes improvements of 4.6%, 1.8%, 5.5%, 10.0% and 2.3% respectively, which demonstrates the superiority of the proposed FMD-Yolo in this two-category detection problem.

Table 3
Performance comparison of FMD-Yolo and other eight detection algorithms on the MD-2 dataset.
Methods                 mAP    AP50   AP75   mAR    AR50
Faster RCNN baseline    0.565  0.869  0.668  0.623  0.895
Faster RCNN with FPN    0.597  0.886  0.713  0.655  0.900
Yolo V3                 0.574  0.888  0.659  0.670  0.929
Yolo V4                 0.599  0.899  0.718  0.661  0.920
RetinaNet               0.616  0.887  0.726  0.664  0.909
FCOS                    0.605  0.894  0.710  0.674  0.932
EfficientDet            0.613  0.870  0.726  0.665  0.910
HRNet                   0.618  0.902  0.745  0.671  0.916
FMD-Yolo (ours)         0.664  0.920  0.800  0.774  0.955

According to Eq. (12) and Eq. (13), precision measures the accuracy of the model's prediction results and recall reflects the ability to identify TP instances among all positive samples. Notice that at a larger IoU value, the improvement brought by FMD-Yolo on AP75 is 3.7% higher than that on AP50, which indicates that FMD-Yolo meets higher localization-accuracy requirements with an even more significant performance advantage. This is mainly because the IoU and IoU-aware losses adopted in the training process of FMD-Yolo make the prediction results more precise. In terms of mAR, which exceeds the second best by 10%, it can be inferred that the preprocessing phase of the FMD-Yolo model contributes the most, including the various data augmentation methods and the Anchor Cluster algorithm combining k-means and genetic algorithms, which finds the best-matching anchor size settings. In addition, the comparison with the classical YoloV3 and YoloV4 models demonstrates the effectiveness of the designed feature extractor and fusion structure in FMD-Yolo.

In Table 4, the experimental results on the MD-3 database are reported.

Table 4
Performance comparison of FMD-Yolo and other eight detection algorithms on the MD-3 dataset.
Methods                 mAP    AP50   AP75   mAR    AR50
Faster RCNN baseline    0.521  0.821  0.604  0.601  0.889
Faster RCNN with FPN    0.536  0.846  0.551  0.615  0.907
Yolo V3                 0.507  0.824  0.585  0.609  0.927
Yolo V4                 0.525  0.843  0.591  0.587  0.897
RetinaNet               0.489  0.792  0.536  0.579  0.875
FCOS                    0.521  0.796  0.558  0.628  0.916
EfficientDet            0.454  0.699  0.518  0.569  0.841
HRNet                   0.512  0.802  0.556  0.571  0.845
FMD-Yolo (ours)         0.575  0.884  0.643  0.699  0.929

Notice that compared with MD-2, MD-3 has an extra category, which makes recognition more difficult. Therefore, some algorithms such as EfficientDet show performance degradation on MD-3 relative to MD-2, as can be seen from Table 3 and Table 4. However, the proposed FMD-Yolo still performs best, improving mAP, AP50, AP75, mAR and AR50 by 3.9%, 3.8%, 3.9%, 7.1% and 0.2% respectively over the model that ranks second on each indicator. It can be concluded that the FMD-Yolo algorithm is robust enough to present overwhelming advantages in both two-category and three-category face mask detection tasks, which makes it suitable for application in more complex and variable real-world scenarios.

In order to further intuitively analyze and compare the performance of each algorithm on the face mask detection task, Fig. 9 shows the overall P-R curves of each algorithm on the MD-2 and MD-3 datasets, where IoU = 0.5 is selected as the threshold dividing positive and negative samples. For any two models, if the P-R curve of one is completely enclosed by that of the other, the latter is deemed better. In addition, the superiority of individual models can be estimated from the area enclosed by the P-R curve and the coordinate axes, or from the break-even point (BEP). The so-called BEP is the point on the P-R curve of a model where precision and recall are equal, i.e., the intersection of the P-R curve and the precision = recall line. As illustrated in Fig. 9, both the area under the P-R curve and the BEP of the FMD-Yolo method are significantly better than those of the other algorithms regardless of the database, which further indicates that FMD-Yolo does have outstanding performance in face mask detection applications.

(Fig. 9. Overall P-R curves for each model on the MD-2 and MD-3 datasets (IoU = 0.5).)
4.2.2. Category-wise results and analysis

In this subsection, more in-depth category-wise discussions are presented. Table 5 provides the evaluation results on the MD-2 database, where precision cAP50 and average recall cAR50 at IoU = 0.5 for the categories face (a1) and face_mask (a2) are reported. From the results, it is found that all methods perform better at detecting the a2 category. In particular, FMD-Yolo presents a 10.8% higher cAP50 and an 8.7% better cAR50 value on a2 than on a1. But even on the a1 class, FMD-Yolo achieves the best results, outperforming the second best by 3.2% and 4.1% on cAP50 and cAR50, respectively.

Table 5
Performance comparison on each category of the MD-2 dataset.
Methods                 face (a1)          face_mask (a2)
                        cAP50   cAR50      cAP50   cAR50
Faster RCNN baseline    0.776   0.811      0.962   0.979
Faster RCNN with FPN    0.811   0.824      0.961   0.975
Yolo V3                 0.804   0.863      0.971   0.994
Yolo V4                 0.829   0.855      0.969   0.985
RetinaNet               0.796   0.825      0.978   0.993
FCOS                    0.819   0.870      0.969   0.994
EfficientDet            0.762   0.826      0.979   0.993
HRNet                   0.834   0.852      0.971   0.980
FMD-Yolo (ours)         0.866   0.911      0.974   0.998

Similarly, Table 6 reports the category-wise evaluation results on the MD-3 dataset. Obviously, category b3 is the most difficult one to detect accurately. FMD-Yolo takes first place in terms of precision on every category; specifically on b3, its cAP50 value is 3.3% higher than the second best, which demonstrates the superiority of FMD-Yolo in complicated detection work. As for cAR50, FMD-Yolo ranks fourth on category b3 and best on the others. The slight loss of cAR50 on b3 may result from the smaller number of samples of this category in the validation set, and from the fact that the FMD-Yolo model pays more attention to higher-quality detection at larger IoU. Still, FMD-Yolo presents a satisfactory performance on this three-category detection problem.

Table 6
Performance comparison on each category of the MD-3 dataset (b1 / b2 / b3).
Methods                 cAP50                  cAR50
Faster RCNN baseline    0.817 / 0.917 / 0.730  0.860 / 0.930 / 0.875
Faster RCNN with FPN    0.838 / 0.913 / 0.787  0.860 / 0.922 / 0.938
Yolo V3                 0.864 / 0.928 / 0.681  0.900 / 0.943 / 0.938
Yolo V4                 0.840 / 0.903 / 0.785  0.887 / 0.928 / 0.875
RetinaNet               0.774 / 0.870 / 0.732  0.800 / 0.886 / 0.938
FCOS                    0.842 / 0.917 / 0.627  0.933 / 0.940 / 0.875
EfficientDet            0.696 / 0.870 / 0.530  0.793 / 0.917 / 0.812
HRNet                   0.825 / 0.915 / 0.666  0.860 / 0.925 / 0.750
FMD-Yolo (ours)         0.892 / 0.939 / 0.820  0.947 / 0.964 / 0.875

Furthermore, as shown in Fig. 10, the P-R curves of FMD-Yolo are illustrated for categories a1 and a2 of the MD-2 dataset, and categories b1, b2 and b3 of the MD-3 dataset. Notice that the values of the BEP (break-even point) and the areas under the curves satisfy a2 > a1 and b2 > b1 > b3, which verifies the reasonability of the above discussions. It is also worth noting that both categories a2 and b2 belong to the face-wearing-mask type, on which FMD-Yolo performs well with cAP50 of 97.4% and 93.9% respectively; this indicates that in real-world applications FMD-Yolo is capable of accurately identifying people who have worn a mask.

(Fig. 10. P-R curves of FMD-Yolo on each category (IoU = 0.5).)
5. Conclusion

In this paper, an efficient automatic face mask recognition and detection framework, FMD-Yolo, and the corresponding algorithm are proposed. In particular, a modified Res2Net structure, Im-Res2Net-101, serves as the backbone to extract features with rich receptive fields from the input. It is followed by a feature fusion component, En-PAN, a novel path aggregation network that primarily consists of the Yolo Residual Block, SPP, Coord Conv and SE Layer blocks. In En-PAN, high-level semantic information and low-level details can be sufficiently merged so that highly distinctive features with strong robustness can be constructed. In addition, a localization loss function including the IoU and IoU-aware losses is applied for training FMD-Yolo, and the parallel computing method Matrix NMS is applied in the post-processing stage, which greatly enhances the model inference efficiency. Benchmark evaluations on two publicly available datasets and comparisons with eight other state-of-the-art object detection algorithms have demonstrated the effectiveness and superiority of FMD-Yolo. In future work, we will further enhance the generalization ability of FMD-Yolo so as to enable applications in various real-world detection scenarios, which can be achieved by designing a more efficient feature fusion model with channel and spatial self-attention mechanisms.

Acknowledgments

This work was supported in part by the Natural Science Foundation of China under Grant 62073271, in part by the UK-China Industry Academia Partnership Programme under Grant UK-CIAPP-276, in part by the International Science and Technology Cooperation Project of Fujian Province of China under Grant 2019I0003, in part by the Fundamental Research Funds for the Central Universities under Grant 20720190009, in part by the Open Fund of the Engineering Research Center of Big Data Application in Private Health Medicine under Grant KF2020002, and in part by the Fujian Key Laboratory of Automotive Electronics and Electric Drive (Fujian University of Technology) under Grant KF-X19002. (Corresponding author: Nianyin Zeng, zny@xmu.edu.cn.)

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] A. Bassi, B.M. Henry, L. Pighi, L. Leone, G. Lippi, Evaluation of indoor hospital acclimatization of body temperature before COVID-19 fever screening, J. Hosp. Infect. 112 (2021) 127–128.
[2] V. Bazarevsky, Y. Kartynnik, A. Vakunov, K. Raveendran, M. Grundmann, BlazeFace: sub-millisecond neural face detection on mobile GPUs, arXiv preprint arXiv:1907.05047, 2019.
[3] A. Bochkovskiy, C. Wang, H. Liao, YOLOv4: optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934, 2020.
[4] C. Chinrungrueng, C. Sequin, Optimal adaptive k-means algorithm with dynamic adjustment of learning rate, IJCNN-91-Seattle Int. Jt. Conf. Neural Netw. 1 (1991) 855–862.
[5] S. Gao, M. Cheng, K. Zhao, X. Zhang, M. Yang, P. Torr, Res2Net: a new multi-scale backbone architecture, IEEE Trans. Pattern Anal. Mach. Intell. 43 (2019) 652–662.
[6] G. Ghiasi, T. Lin, Q. Le, DropBlock: a regularization method for convolutional networks, Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS), 2018, pp. 10750–10760.
[7] K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, IEEE Trans. Pattern Anal. Mach. Intell. 37 (9) (2015) 1904–1916.
[8] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[9] G. Huang, Z. Liu, L. van der Maaten, K. Weinberger, Densely connected convolutional networks, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269.
[10] Hendry, R.-C. Chen, Automatic license plate recognition via sliding-window darknet-YOLO deep learning, Image Vis. Comput. 87 (2019) 47–56.
[11] J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE Trans. Pattern Anal. Mach. Intell. 42 (8) (2020) 2011–2023.
[12] D. Jain, Z. Zhang, K. Huang, Random walk-based feature learning for micro-expression recognition, Pattern Recogn. Lett. 115 (2018) 92–100.
[13] D. Jain, Z. Zhang, K. Huang, Multi angle optimal pattern-based deep learning for automatic facial expression recognition, Pattern Recogn. Lett. 139 (2020) 157–165.
[14] A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep convolutional neural networks, Neural Inf. Process. Syst. (2012) 1097–1105.
Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] A. Bassi, B.M. Henry, L. Pighi, L. Leone, G. Lippi, Evaluation of indoor hospital acclimatization of body temperature before COVID-19 fever screening, J. Hosp. Infect. 112 (2021) 127–128.
[2] V. Bazarevsky, Y. Kartynnik, A. Vakunov, K. Raveendran, M. Grundmann, BlazeFace: sub-millisecond neural face detection on mobile GPUs, 2019, arXiv preprint arXiv:1907.05047.
[3] A. Bochkovskiy, C. Wang, H. Liao, YOLOv4: optimal speed and accuracy of object detection, 2020, arXiv preprint arXiv:2004.10934.
[4] C. Chinrungrueng, C. Sequin, Optimal adaptive k-means algorithm with dynamic adjustment of learning rate, IJCNN-91-Seattle Int. Jt. Conf. Neural Netw. 1 (1991) 855–862.
[5] S. Gao, M. Cheng, K. Zhao, X. Zhang, M. Yang, P. Torr, Res2Net: a new multi-scale backbone architecture, IEEE Trans. Pattern Anal. Mach. Intell. 43 (2019) 652–662.
[6] G. Ghiasi, T. Lin, Q. Le, DropBlock: a regularization method for convolutional networks, Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS), 2018, pp. 10750–10760.
[7] K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, IEEE Trans. Pattern Anal. Mach. Intell. 37 (9) (2015) 1904–1916.
[8] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[9] G. Huang, Z. Liu, L. van der Maaten, K. Weinberger, Densely connected convolutional networks, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269.
[10] Hendry, R.C. Chen, Automatic license plate recognition via sliding-window darknet-YOLO deep learning, Image Vis. Comput. 87 (2019) 47–56.
[11] J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE Trans. Pattern Anal. Mach. Intell. 42 (8) (2020) 2011–2023.
[12] D. Jain, Z. Zhang, K. Huang, Random walk-based feature learning for micro-expression recognition, Pattern Recogn. Lett. 115 (2018) 92–100.
[13] D. Jain, Z. Zhang, K. Huang, Multi angle optimal pattern-based deep learning for automatic facial expression recognition, Pattern Recogn. Lett. 139 (2020) 157–165.
[14] A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep convolutional neural networks, Neural Inf. Process. Syst. (2012) 1097–1105.
[15] U. Mahbub, S. Sarkar, R. Chellappa, Partial face detection in the mobile domain, Image Vis. Comput. 82 (2019) 1–17.
[16] S. Meivel, K. Devi, S. Maheswari, J. Menaka, Real time data analysis of face mask detection and social distance measurement using MATLAB, Mater. Today Proc. (2021).
[17] H. Law, J. Deng, CornerNet: detecting objects as paired keypoints, Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 734–750.
[18] M. Liao, H. Liu, X. Wang, X. Hu, Y. Huang, X. Liu, K. Brenan, J. Mecha, M. Nirmalan, J. Lu, A technical review of face mask wearing in preventing respiratory COVID-19 transmission, Curr. Opin. Colloid Interface Sci. 52 (2021).
[19] T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid networks for object detection, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 936–944.
[20] T. Lin, P. Goyal, R. Girshick, K. He, P. Dollar, Focal loss for dense object detection, IEEE Trans. Pattern Anal. Mach. Intell. 42 (2) (2020) 318–327.
[21] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, A.C. Berg, SSD: single shot multibox detector, Proceedings of the European Conference on Computer Vision (ECCV), 2016, pp. 21–37.
[22] R. Liu, J. Lehman, P. Molino, F. Such, E. Frank, A. Sergeev, J. Yosinski, An intriguing failing of convolutional neural networks and the CoordConv solution, Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS), 2018, pp. 9628–9639.
[23] S. Liu, L. Qi, H. Qin, J. Shi, J. Jia, Path aggregation network for instance segmentation, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 8759–8768.
[24] M. Loey, G. Manogaran, M. Taha, N. Khalifad, Fighting against COVID-19: a novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection, Sustain. Cities Soc. 65 (2021).
[25] A. Rahmani, S. Mirmahaleh, Coronavirus disease (COVID-19) prevention and treatment methods and effective parameters: a systematic literature review, Sustain. Cities Soc. 64 (2020).
[26] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell. 39 (6) (2017) 1137–1149.
[27] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time object detection, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
[28] J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525.
[29] J. Redmon, A. Farhadi, YOLOv3: an incremental improvement, 2018, arXiv preprint arXiv:1804.02767.
[30] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, Int. Conf. Learn. Represent. (2015) 1–14.
[31] K. Sun, B. Xiao, D. Liu, J. Wang, Deep high-resolution representation learning for human pose estimation, 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 5693–5703.
[32] S. Singh, U. Ahuja, M. Kumar, K. Kumar, M. Sachdeva, Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment, Multimedia Tools Appl. 80 (2021) 19753–19768.
[33] M. Tan, R. Pang, Q. Le, EfficientDet: scalable and efficient object detection, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 10781–10790.
[34] Z. Tian, C. Shen, H. Chen, T. He, FCOS: fully convolutional one-stage object detection, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9626–9635.
[35] X. Wang, R. Girshick, A. Gupta, K. He, Non-local neural networks, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794–7803.
[36] X. Wang, R. Zhang, T. Kong, L. Li, C. Shen, SOLOv2: dynamic and fast instance segmentation, Adv. Neural Inf. Process. Syst. 33 (2020) 17721–17732.
[37] D. Wu, S. Lv, M. Jiang, H. Song, Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments, Comput. Electron. Agric. 178 (2020).
[38] S. Wu, X. Li, X. Wang, IoU-aware single-stage object detector for accurate localization, Image Vis. Comput. 97 (2020).
[39] X. Ye, D. Hong, H. Chen, P. Hsiao, L. Fu, A two-stage real-time YOLOv2-based road marking detector with lightweight spatial transformation-invariant classification, Image Vis. Comput. 102 (2020).
[40] F. Yu, D. Wang, E. Shelhamer, T. Darrell, Deep layer aggregation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2403–2412.
[41] N. Zeng, H. Zhang, B. Song, W. Liu, Y. Li, A.M. Dobaie, Facial expression recognition via learning deep sparse autoencoders, Neurocomputing 273 (2018) 643–649.
[42] N. Zeng, H. Li, Y. Peng, A new deep belief network-based multi-task learning for diagnosis of Alzheimer's disease, Neural Comput. Applic. (2021), https://doi.org/10.1007/s00521-021-06149-6.
[43] H. Zhang, M. Cisse, Y. Dauphin, D. Lopez-Paz, Mixup: beyond empirical risk minimization, Int. Conf. Learn. Represent. (2018).
[44] G. Zheng, Y. Xu, Efficient face detection and tracking in video sequences based on deep learning, Inf. Sci. 568 (2021) 265–285.
[45] X. Zhu, H. Hu, S. Lin, J. Dai, Deformable ConvNets V2: more deformable, better results, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9300–9308.