
PERFORMANCE VALIDATION


A detailed experimental analysis is carried out on two databases: IIT and CUHK. A sample set of images is shown in Figure 9.4.

The qualitative analysis of the FRCNN-GAN model is shown in Figure 9.5. As depicted, the FRCNN-GAN model has generated a sketch image that closely resembles the input image.

Table 9.1 and Figures 9.6 and 9.7 report the FSS results of the FRCNN-GAN model in terms of PSNR and SSIM on the two applied datasets.

TABLE 9.1 Results Analysis of the FRCNN-GAN with Existing Methods in Terms of PSNR and SSIM

Dataset  Measure    MRF    MWF    SRGS   SCDL   CNN    FRCNN-GAN
CUHK     PSNR (dB)  15.07  14.41  14.79  15.14  15.64  16.12
CUHK     SSIM       0.58   0.59   0.58   0.59   0.59   0.62
IIT      PSNR (dB)  19.26  17.20  18.46  18.33  19.62  20.23
IIT      SSIM       0.54   0.57   0.59   0.58   0.61   0.65

Figure 9.6 shows the PSNR analysis of the FRCNN-GAN model on the two applied datasets. On the CUHK dataset, the experimental values indicate that the MWF and SRGS models have shown the minimum PSNR values of 14.41 and 14.79 dB, respectively. At the same time, the MRF and SCDL models have resulted in slightly higher PSNR values of 15.07 and 15.14 dB, respectively. Besides, the CNN model has shown somewhat better performance with a PSNR value of 15.64 dB. Furthermore, the FRCNN-GAN model has attained the highest PSNR value of 16.12 dB. On the IIT dataset, the experimental measures show that the MWF and SCDL frameworks have exhibited the lower PSNR values of 17.20 and 18.33 dB, respectively. The SRGS and MRF approaches have shown better PSNR values of 18.46 and 19.26 dB, respectively. Also, the CNN method has shown a considerably higher PSNR value of 19.62 dB. The FRCNN-GAN method has attained the maximum PSNR value of 20.23 dB.
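For reference, PSNR is derived from the pixel-wise mean squared error between the synthesized sketch and the ground-truth sketch, which is how dB values such as those in Table 9.1 are obtained. The chapter does not provide its evaluation code; the following is a minimal sketch using scikit-image, assuming aligned, equal-sized 8-bit grayscale images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

def psnr_db(reference: np.ndarray, synthesized: np.ndarray) -> float:
    """PSNR in dB: 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit images."""
    return peak_signal_noise_ratio(reference, synthesized, data_range=255)

# Hypothetical usage: a ground-truth sketch and a slightly perturbed copy.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
noise = rng.integers(-10, 11, size=gt.shape)
syn = np.clip(gt.astype(int) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr_db(gt, syn):.2f} dB")  # small noise -> high PSNR
```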

Figure 9.7 shows the SSIM analysis of the FRCNN-GAN method on the two applied datasets. On the CUHK dataset, the experimental scores indicate that the MRF and SRGS methodologies have shown the lower SSIM value of 0.58. Simultaneously, the MWF, CNN, and SCDL schemes have attained a better and identical SSIM value of 0.59. Moreover, the FRCNN-GAN approach has provided the maximum SSIM value of 0.62. On the IIT dataset, the experimental values indicate that the MRF and MWF methodologies have shown the lower SSIM values of 0.54 and 0.57, respectively. Concurrently, the SRGS and SCDL frameworks have attained moderate SSIM values of 0.59 and 0.58, respectively. Additionally, the CNN technique has shown manageable performance with an SSIM value of 0.61. Also, the FRCNN-GAN model has provided the highest SSIM value of 0.65.
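Unlike PSNR, SSIM is a unitless index (typically in [0, 1] for natural images) that compares local luminance, contrast, and structure, approaching 1 for structurally identical images. A minimal sketch under the same assumptions as above:

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(reference: np.ndarray, synthesized: np.ndarray) -> float:
    """Mean SSIM over local windows; 1.0 means structurally identical."""
    return structural_similarity(reference, synthesized, data_range=255)

# Hypothetical usage with any two aligned 8-bit grayscale sketches:
# score = ssim_score(gt_sketch, synthesized_sketch)  # e.g. ~0.62 on CUHK
```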


FIGURE 9.5 (a) Input image, (b) Viewed sketch, (c) Forensic image, (d) Sketch synthesis image.


FIGURE 9.6 PSNR analysis of the FRCNN-GAN model on the two applied datasets.

Table 9.2 and Figures 9.8 and 9.9 examine the accuracy of the FRCNN-GAN approach on the two applied datasets.

TABLE 9.2 Accuracy (%) Analysis of the FRCNN-GAN with Existing Methods

Methods    CUHK   IIT    Average
MRF        71.30  71.34  71.32
MWF        70.84  68.30  69.57
SRGS       72.45  72.40  72.43
SCDL       69.85  71.75  70.80
CNN        78.53  80.21  79.37
FRCNN-GAN  80.56  81.40  80.98

Figure 9.8 depicts the accuracy analysis of the FRCNN-GAN method on the two applied datasets. On the CUHK dataset, the experimental measures indicate that the SCDL and MWF methodologies have exhibited the lower accuracy values of 69.85% and 70.84%, respectively. Simultaneously, the MRF and SRGS approaches have offered reasonable accuracy values of 71.30% and 72.45%, respectively. Furthermore, the CNN approach has exhibited better performance with an accuracy value of 78.53%. Additionally, the FRCNN-GAN technique has attained the maximum accuracy value of 80.56%. On the IIT dataset, the experimental values point out that the MWF and MRF models have depicted the lower accuracy values of 68.30% and 71.34%, respectively. Besides, the SCDL and SRGS methodologies have accomplished reasonable accuracy values of 71.75% and 72.40%, respectively. Additionally, the CNN approach has shown slightly better performance with an accuracy value of 80.21%. Moreover, the FRCNN-GAN approach has showcased a remarkable accuracy value of 81.40%.
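The chapter does not state how the accuracy values in Table 9.2 are computed. A common protocol in face sketch recognition is rank-1 identification accuracy: each synthesized sketch is matched against a gallery of reference sketches and counted as correct when the nearest gallery entry has the right identity. The sketch below illustrates that protocol; the feature representation and the Euclidean matching rule are assumptions, not the chapter's stated method.

```python
import numpy as np

def rank1_accuracy(probe_feats, gallery_feats, probe_ids, gallery_ids):
    """Fraction of probes whose nearest gallery feature (Euclidean
    distance) carries the correct identity (rank-1 identification)."""
    # Pairwise distance matrix of shape (n_probes, n_gallery).
    d = np.linalg.norm(probe_feats[:, None, :] - gallery_feats[None, :, :],
                       axis=-1)
    predicted = gallery_ids[np.argmin(d, axis=1)]
    return float(np.mean(predicted == probe_ids))

# Hypothetical usage: 5 identities, 128-D features, probes close to gallery.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 128))
ids = np.arange(5)
probes = gallery + 0.01 * rng.normal(size=gallery.shape)
print(rank1_accuracy(probes, gallery, ids, ids))  # -> 1.0
```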

Figure 9.9 displays the average accuracy analysis of the FRCNN-GAN approach on the two applied datasets. The experimental values point out that the MWF and SCDL models have showcased the lowest average values of 69.57% and 70.80%, respectively. Concurrently, the MRF and SRGS models have concluded with better average values of 71.32% and 72.43%, respectively. Also, the CNN approach has depicted considerable performance with a higher average value of 79.37%. Additionally, the FRCNN-GAN framework has resulted in the maximum average value of 80.98%.

FIGURE 9.9 Average accuracy analysis of the FRCNN-GAN model with existing methods.
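The Average column of Table 9.2 is simply the mean of the two per-dataset accuracies, which can be verified directly (values taken from the table):

```python
# Per-dataset accuracies (%) from Table 9.2: (CUHK, IIT).
results = {
    "MRF":       (71.30, 71.34),
    "MWF":       (70.84, 68.30),
    "SRGS":      (72.45, 72.40),
    "SCDL":      (69.85, 71.75),
    "CNN":       (78.53, 80.21),
    "FRCNN-GAN": (80.56, 81.40),
}
for method, (cuhk, iit) in results.items():
    # SRGS yields 72.425, reported as 72.43 in the table (rounded half up).
    print(f"{method:9s}  average = {(cuhk + iit) / 2:.2f}%")
```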

CONCLUSION

This chapter has developed a new IoT- and 5G-enabled Faster RCNN with GAN, called the FRCNN-GAN model, for FSS. The proposed model initially involves an image-capturing process using IoT devices connected to the OpenMV Cam M7 Smart Vision Camera, which captures the images and stores them in memory. Then, the FRCNN-GAN model executes the face recognition process using the Faster RCNN model, which identifies the faces in the captured image. Next, the GAN-based FSS module synthesizes the recognized face and generates the face sketch. Finally, the generated face sketch is compared with the sketches in the forensic databases, and the most relevant image is identified. An extensive experimental analysis has been carried out on two databases: the IIT dataset and the CUHK dataset. The simulation outcomes confirm the effective performance of the FRCNN-GAN model over the compared methods.
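As a summary of the stated pipeline, the sketch below mirrors the four stages in order: IoT image capture, Faster RCNN face detection, GAN-based sketch synthesis, and forensic-database matching. Every function body is a hypothetical placeholder; the chapter does not publish an implementation, and the names and the MSE-based matching rule are illustrative assumptions.

```python
from typing import List, Tuple
import numpy as np

# All functions below are hypothetical placeholders mirroring the stages
# described in this chapter; they are not the published implementation.

def capture_image_from_openmv() -> np.ndarray:
    """Stage 1: acquire a frame from the OpenMV Cam M7 (dummy frame here)."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def detect_faces_frcnn(image: np.ndarray) -> List[np.ndarray]:
    """Stage 2: Faster RCNN face detection (returns one dummy crop here)."""
    return [image[:128, :128]]

def synthesize_sketch_gan(face: np.ndarray) -> np.ndarray:
    """Stage 3: GAN-based FSS; a real generator maps photo -> sketch."""
    return face.mean(axis=-1).astype(np.uint8)

def match_forensic_database(sketch: np.ndarray,
                            database: List[Tuple[str, np.ndarray]]) -> str:
    """Stage 4: nearest stored sketch by mean squared error (an assumed
    similarity rule; the chapter only states that sketches are compared)."""
    errors = [(name, float(np.mean((ref.astype(float) - sketch) ** 2)))
              for name, ref in database]
    return min(errors, key=lambda t: t[1])[0]

def frcnn_gan_pipeline(database: List[Tuple[str, np.ndarray]]) -> None:
    image = capture_image_from_openmv()
    for face in detect_faces_frcnn(image):
        sketch = synthesize_sketch_gan(face)
        print("Most relevant record:", match_forensic_database(sketch, database))

if __name__ == "__main__":
    db = [("record-A", np.zeros((128, 128), dtype=np.uint8)),
          ("record-B", np.full((128, 128), 200, dtype=np.uint8))]
    frcnn_gan_pipeline(db)  # dummy frame matches record-A
```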

ACKNOWLEDGMENT

This research work was carried out with the financial support of the RUSA-Phase 2.0 grant sanctioned vide Letter No. F. 24-51/2014-U, Policy (TN Multi-Gen), Department of Education, Government of India, dated 9.10.2018.
