A detailed experimental analysis is performed on two databases: IIT and CUHK. A set of sample images is shown in Figure 9.4.
The qualitative analysis of the FRCNN-GAN model is shown in Figure 9.5. As depicted in the figure, the FRCNN-GAN model generates sketch images that closely resemble the input images.
Table 9.1 and Figures 9.6 and 9.7 analyze the FSS results of the FRCNN-GAN model in terms of PSNR and SSIM on the two applied datasets.
TABLE 9.1 Results Analysis of the FRCNN-GAN with Existing Methods in Terms of PSNR and SSIM
Figure 9.6 shows the PSNR analysis of the FRCNN-GAN model on the two applied datasets. On the CUHK dataset, the MWF and SRGS models showed the lowest PSNR values of 14.41 and 14.79 dB, respectively, while the MRF and SCDL models produced slightly higher PSNR values of 15.07 and 15.14 dB, respectively. The CNN model performed somewhat better, with a PSNR value of 15.64 dB, and the FRCNN-GAN model achieved the highest PSNR value of 16.12 dB. On the IIT dataset, the MWF and SCDL frameworks exhibited the lowest PSNR values of 17.20 and 18.33 dB, respectively, and the SRGS and MRF approaches showed better PSNR values of 18.46 and 19.26 dB, respectively. The CNN method reached a considerable PSNR value of 19.62 dB, while the FRCNN-GAN method attained the maximum PSNR value of 20.23 dB.
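PSNR is computed directly from the pixel-wise mean squared error between the synthesized sketch and the reference; a minimal sketch in Python, assuming 8-bit grayscale image arrays (this helper is illustrative, not the chapter's implementation):

```python
import numpy as np

def psnr(reference, synthesized, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized images."""
    reference = np.asarray(reference, dtype=np.float64)
    synthesized = np.asarray(synthesized, dtype=np.float64)
    mse = np.mean((reference - synthesized) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A maximally wrong 8-bit prediction gives 0 dB.
black = np.zeros((4, 4))
white = np.full((4, 4), 255.0)
print(psnr(black, white))  # 0.0
```

Higher values indicate a synthesized sketch closer to the reference, which is why 20.23 dB on the IIT dataset marks the best result above.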
Figure 9.7 presents the SSIM analysis of the FRCNN-GAN method on the two applied datasets (SSIM is a unitless index). On the CUHK dataset, the MRF and SRGS methods showed the lowest SSIM value of 0.58 each, while the MWF, CNN, and SCDL schemes attained the same slightly better SSIM value of 0.59. The FRCNN-GAN approach provided the maximum SSIM value of 0.62. On the IIT dataset, the MRF and MWF methods showed the lowest SSIM values of 0.54 and 0.57, respectively, while the SRGS and SCDL frameworks attained moderate SSIM values of 0.59 and 0.58, respectively. The CNN technique performed well with an SSIM value of 0.61, and the FRCNN-GAN model provided the highest SSIM value of 0.65.
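SSIM combines luminance, contrast, and structure terms into a single similarity index (1.0 for identical images). A single-window (whole-image) sketch of the standard formula, with the usual C1/C2 stabilizing constants for 8-bit images, might look like the following; practical implementations typically apply it over sliding windows instead:

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM over the whole image (no sliding window)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizer for the mean (luminance) term
    c2 = (0.03 * max_val) ** 2  # stabilizer for the variance (contrast) term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(ssim_global(img, img))  # ~1.0 for identical images
```

Because SSIM is bounded above by 1, the gap between 0.62 (FRCNN-GAN) and 0.58-0.59 (the baselines) on CUHK is a meaningful structural improvement.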
FIGURE 9.5 (a) Input image, (b) Viewed sketch, (c) Forensic image, (d) Sketch synthesis image.
FIGURE 9.6 PSNR analysis of FRCNN-GAN model on the applied two datasets.
TABLE 9.2 Accuracy Analysis of the FRCNN-GAN with Existing Methods.
Table 9.2 and Figures 9.8 and 9.9 examine the accuracy of the FRCNN-GAN approach on the two applied datasets. Figure 9.8 depicts the accuracy analysis of the FRCNN-GAN method on both datasets. On the CUHK dataset, the SCDL and MWF methods exhibited the lowest accuracy values of 69.85% and 70.84%, respectively, while the MRF and SRGS approaches offered reasonable accuracy values of 71.30% and 72.45%, respectively. The CNN approach performed moderately well with an accuracy of 78.53%, and the FRCNN-GAN technique achieved the maximum accuracy of 80.56%. On the IIT dataset, the MRF and MWF models showed the lowest accuracy values of 68.30% and 71.34%, respectively, and the SCDL and SRGS methods accomplished reasonable accuracy values of 71.75% and 72.40%, respectively. The CNN approach performed slightly better with an accuracy of 80.21%, while the FRCNN-GAN approach showcased a remarkable accuracy of 81.40%.
FIGURE 9.9 Average analysis of FRCNN-GAN model with existing methods.
Figure 9.9 displays the average accuracy analysis of the FRCNN-GAN approach across the two applied datasets. The MWF and SCDL models showed the lowest average values of 69.57% and 70.8%, respectively, while the MRF and SRGS models concluded with better average values of 71.32% and 72.43%, respectively. The CNN approach reached a higher average value of 79.37%, and the FRCNN-GAN framework attained the maximum average value of 80.98%.
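As a sanity check, the averages for the CNN and FRCNN-GAN rows follow directly from the per-dataset accuracies reported above (the dictionary names below are illustrative):

```python
# Per-dataset accuracies (%) for the two strongest methods, from the text above.
cuhk = {"CNN": 78.53, "FRCNN-GAN": 80.56}
iit = {"CNN": 80.21, "FRCNN-GAN": 81.40}

# Average across the two datasets, rounded to two decimal places.
average = {m: round((cuhk[m] + iit[m]) / 2, 2) for m in cuhk}
print(average)  # {'CNN': 79.37, 'FRCNN-GAN': 80.98}
```

This reproduces the 79.37% and 80.98% averages quoted above.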
This chapter has developed a new IoT- and 5G-enabled Faster RCNN with GAN, called the FRCNN-GAN model, for FSS. The proposed model first captures images using IoT devices connected to the OpenMV Cam M7 Smart Vision Camera, which stores the captured images in memory. The FRCNN-GAN model then performs face recognition using the Faster RCNN model, which properly identifies the faces in the captured image. Next, the GAN-based FSS module synthesizes the recognized face and generates the face sketch. Finally, the generated face sketch is compared with the sketches in the forensic databases, and the most relevant image is identified. An extensive experimental analysis was carried out on two databases, the IIT and CUHK datasets. The simulation results confirm the effective performance of the FRCNN-GAN model over the compared methods.
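The end-to-end flow summarized above (detect faces, synthesize a sketch, match against the forensic gallery) can be sketched in simplified form. The detector, generator, and matcher below are hypothetical stand-ins with placeholder logic, not the chapter's actual Faster RCNN or GAN components:

```python
import numpy as np

def detect_faces(image):
    """Stand-in for the Faster RCNN detector: returns face crop boxes."""
    h, w = image.shape[:2]
    return [(0, 0, h, w)]  # whole image as a single "face" for illustration

def synthesize_sketch(face):
    """Stand-in for the GAN sketch generator (here: simple inversion)."""
    return 255.0 - face

def match_sketch(sketch, gallery):
    """Rank forensic-gallery sketches by mean squared error; return best index."""
    errors = [np.mean((sketch - g) ** 2) for g in gallery]
    return int(np.argmin(errors))

# Toy run: one captured frame, a two-sketch forensic gallery.
image = np.full((8, 8), 200.0)
gallery = [np.full((8, 8), 55.0), np.full((8, 8), 120.0)]
for (r0, c0, r1, c1) in detect_faces(image):
    sketch = synthesize_sketch(image[r0:r1, c0:c1])
    best = match_sketch(sketch, gallery)
print(best)  # 0: the inverted face (all 55s) matches the first gallery sketch
```

The design point is the staged pipeline itself: each stage (detection, synthesis, matching) can be swapped out independently, which is what lets the Faster RCNN and GAN modules be trained and evaluated separately.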
This research work was carried out with the financial support of the RUSA-Phase 2.0 grant sanctioned vide Letter No. F. 24-51/2014-U, Policy (TNMulti-Gen), Department of Education, Government of India, dated 9.10.2018.