Performance evaluation of Self-Quotient image methods

dc.citation.epage: 14
dc.citation.issue: 1
dc.citation.journalTitle: Український журнал інформаційних технологій
dc.citation.spage: 8
dc.citation.volume: 2
dc.contributor.affiliation: Львівський національний університет ім. Івана Франка
dc.contributor.affiliation: Ivan Franko National University of Lviv
dc.contributor.author: Парубочий, В. О.
dc.contributor.author: Шувар, Роман Ярославович
dc.contributor.author: Parubochyi, V. O.
dc.contributor.author: Shuvar, R. Ya.
dc.coverage.placename: Львів
dc.coverage.placename: Lviv
dc.date.accessioned: 2022-05-24T11:10:09Z
dc.date.available: 2022-05-24T11:10:09Z
dc.date.created: 2020-09-23
dc.date.issued: 2020-09-23
dc.description.abstract: Нормалізація освітлення є дуже важливою проблемою в системах розпізнавання зображень, оскільки різні умови освітлення можуть істотно змінити результати розпізнавання, а нормалізація освітлення дає змогу мінімізувати негативні наслідки різних умов освітлення. У цій роботі ми оцінюємо ефективність розпізнавання декількох методів нормалізації освітлення, заснованих на методі самооцінювання зображення SQI (англ. Self-Quotient Image method), запровадженому Haitao Wang, Stan Z. Li, Yangsheng Wang, та Jianjun Zhang. Для оцінки ми вибрали оригінальну реалізацію та найперспективніші модифікації оригінального методу SQI, в т.ч. й метод Gabor Quotient Image (GQI), запропонований Sanun Srisuk та Amnart Petpon у 2008 році, а також метод Fast Self-Quotient Image (FSQI) та його модифікації, запропоновані авторами статті в попередніх роботах. У цій роботі ми запропонували модель оцінки, яка використовує Cropped Extended Yale Face Database B, що дає змогу показати відмінність результатів розпізнавання для різних умов освітлення. Також ми перевіряємо всі результати за допомогою двох класифікаторів: класифікатора найближчих сусідів (англ. Nearest Neighbor Classifier) та лінійного класифікатора опорних векторів (англ. Linear Support Vector Classifier). Такий підхід дає змогу не тільки обчислити точність розпізнавання для кожного методу та вибрати найкращий метод, але й показати важливість правильного вибору методу класифікації, який може мати значний вплив на результати розпізнавання. Нам вдалося показати значне зменшення точності розпізнавання для необроблених (RAW) зображень із збільшенням кута між джерелом освітлення та нормаллю до об'єкта. З іншого боку, наші експерименти показали майже рівномірний розподіл точності розпізнавання для зображень, оброблених методами нормалізації освітлення на підставі методу SQI. Ще одним отриманим, проте очікуваним результатом, представленим у цій роботі, є підвищення точності розпізнавання із збільшенням розміру ядра фільтра. Однак великі розміри ядра фільтра є більш обчислювально затратними і можуть спричинити негативні ефекти на вихідних зображеннях. Окрім цього, в наших експериментах було показано, що друга модифікація методу FSQI, яку ми скорочено позначаємо як FSQI3, краща майже в усіх випадках для всіх розмірів ядра фільтра, особливо якщо ми використовуємо лінійний класифікатор опорних векторів для класифікації.
dc.description.abstract: Lighting normalization is an especially important issue in image recognition systems, since different illumination conditions can significantly change the recognition results, and lighting normalization minimizes the negative effects of varying illumination. In this paper, we evaluate the recognition performance of several lighting normalization methods based on the Self-Quotient Image (SQI) method introduced by Haitao Wang, Stan Z. Li, Yangsheng Wang, and Jianjun Zhang. For the evaluation, we chose the original implementation and the most promising modifications of the original SQI method, including the Gabor Quotient Image (GQI) method introduced by Sanun Srisuk and Amnart Petpon in 2008, as well as the Fast Self-Quotient Image (FSQI) method and its modifications proposed by the authors in previous works. We propose an evaluation framework that uses the Cropped Extended Yale Face Database B, which makes it possible to show how the recognition results differ under different illumination conditions. We also verify all results with two classifiers: the Nearest Neighbor Classifier and the Linear Support Vector Classifier. This approach allows us not only to calculate the recognition accuracy for each method and select the best one, but also to show the importance of a proper choice of classification method, which can have a significant influence on the recognition results. We show a significant decrease in recognition accuracy for unprocessed (RAW) images as the angle between the lighting source and the normal to the object increases. In contrast, our experiments show an almost uniform distribution of recognition accuracy for images processed by the SQI-based lighting normalization methods. Another demonstrated, though expected, result presented in this paper is the increase in recognition accuracy with increasing filter kernel size. However, large filter kernels are much more computationally expensive and can produce negative effects in the output images. Our experiments also show that the second modification of the FSQI method, denoted FSQI3, is better in almost all cases and for all filter kernel sizes, especially when the Linear Support Vector Classifier is used for classification.
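The abstracts above describe the self-quotient idea: an input image is divided by a smoothed version of itself so that slowly varying illumination is suppressed before classification. The short Python sketch below illustrates only that basic step; it is not the paper's implementation, and the Gaussian kernel, the offset guard, and the rescaling are assumptions made for demonstration (the paper evaluates the original SQI, GQI, and FSQI variants with several kernel sizes).

import numpy as np
import cv2  # OpenCV; assumed available for this illustration only

def self_quotient_sketch(gray: np.ndarray, ksize: int = 15, sigma: float = 0.0) -> np.ndarray:
    """Illustrative self-quotient image: Q = I / (F * I), with F a smoothing filter."""
    img = gray.astype(np.float64) + 1.0                      # small offset guards against division by zero
    smoothed = cv2.GaussianBlur(img, (ksize, ksize), sigma)  # smoothed copy F * I; ksize must be odd
    quotient = img / smoothed                                # slowly varying illumination largely cancels out
    # Rescale to 8-bit so the result can be fed to an ordinary classifier.
    return cv2.normalize(quotient, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Hypothetical usage on a single grayscale face image:
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# sqi = self_quotient_sketch(face, ksize=15)

Increasing ksize in this sketch corresponds to the trade-off noted in the abstract: larger filter kernels tend to improve recognition accuracy but are more computationally expensive.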
dc.format.extent: 8-14
dc.format.pages: 7
dc.identifier.citation: Parubochyi V. O. Performance evaluation of Self-Quotient image methods / V. O. Parubochyi, R. Ya. Shuvar // Український журнал інформаційних технологій. — Львів : Видавництво Львівської політехніки, 2020. — Том 2. — № 1. — С. 8–14.
dc.identifier.citationen: Parubochyi V. O. Performance evaluation of Self-Quotient image methods / V. O. Parubochyi, R. Ya. Shuvar // Ukrainian Journal of Information Technology. — Lviv : Vydavnytstvo Lvivskoi politekhniky, 2020. — Vol. 2. — No. 1. — P. 8–14.
dc.identifier.uri: https://ena.lpnu.ua/handle/ntb/56896
dc.language.iso: en
dc.publisher: Видавництво Львівської політехніки
dc.relation.ispartof: Український журнал інформаційних технологій, 1 (2), 2020
dc.relation.ispartof: Ukrainian Journal of Information Technology, 1 (2), 2020
dc.relation.references[1] Adini, Y., Moses, Y., & Ullman, S. (1997). Face recognition: the problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 721–732. https://doi.org/10.1109/34.598229
dc.relation.references[2] Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711–720. https://doi.org/10.1109/34.598228
dc.relation.references[3] Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory (COLT 92), Association for Computing Machinery, New York, NY, USA, 144–152. https://doi.org/10.1145/130385.130401
dc.relation.references[4] Chen, T., Yin, W., Zhou, X. S., Comaniciu, D., & Huang, T. S. (2005). Illumination normalization for face recognition and uneven background correction using total variation based image models. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR05), 2, 532–539, San Diego, CA, USA. https://doi.org/10.1109/CVPR.2005.181
dc.relation.references[5] Cortes, C., & Vapnik, V. (1995). Support-Vector Networks. Machine Learning, 20, 273–297. https://doi.org/10.1007/BF00994018
dc.relation.references[6] Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. (2008). LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9, 1871–1874.
dc.relation.references[7] Georghiades, A. S., Belhumeur, P. N., & Kriegman, D. J. (2001). From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), 643–660. https://doi.org/10.1109/34.927464
dc.relation.references[8] Georghiades, A. S., Kriegman, D. J., & Belhumeur, P. N. (1998). Illumination Cones for Recognition under Variable Lighting: Faces. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 98), 52–59. https://doi.org/10.1109/CVPR.1998.698587
dc.relation.references[9] Gonzalez, R. C., & Woods, R. E. (2001). Digital Image Processing (2nd. ed.). Addison-Wesley Longman Publishing Co. Inc., USA., 793.
dc.relation.references[10] Gross, R., & Brajovic, V. (2003). An Image Preprocessing Algorithm for Illumination Invariant Face Recognition. 4th International Conference on Audio and Video Based Biometric Person Authentication (AVBPA), 10–18.
dc.relation.references[11] Gryciuk, Yu. I., & Grytsyuk, P. Yu. (2015). Contemporary problems of scientific evaluation of the application software quality. Scientific Bulletin of UNFU, 25(7), 284–294. https://doi.org/10.15421/40250745
dc.relation.references[12] Hallinan, P. W. (1994). A low-dimensional representation of human faces for arbitrary lighting conditions. 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 995–999. https://doi.org/10.1109/CVPR.1994.323941
dc.relation.references[13] Heusch, G., Cardinaux, F., & Marcel, S. (2005). Lighting Normalization Algorithms for Face Verification. IDIAP.
dc.relation.references[14] Hrytsiuk, Yu., & Bilas, O. (2019). Visualization of Software Quality Expert Assessment. IEEE 2019 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT 2019), (Vol. 2, pp. 156–160), 17–20 September, 2019. https://doi.org/10.1109/stc-csit.2019.8929778
dc.relation.references[15] Jobson, D. J., Rahman, Z., & Woodell, G. A. (1997). A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society, 6(7), 965–976. https://doi.org/10.1109/83.597272
dc.relation.references[16] Land, E. H., & McCann, J. J. (1971). Lightness and Retinex Theory. Journal of the Optical Society of America, 61, 1–11. https://doi.org/10.1364/josa.61.000001
dc.relation.references[17] Lee, K. C., Ho, J., & Kriegman, D. J. (2005). Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), 684–698. https://doi.org/10.1109/TPAMI.2005.92
dc.relation.references[18] Makwana, R. M. (2010). Illumination invariant face recognition: A survey of passive methods. Procedia Computer Science, 2, 101–110. https://doi.org/10.1016/j.procs.2010.11.013
dc.relation.references[19] Muruganantham, S., & Jebarajan, T. (2011). Exaggerate Self Quotient Image Model for Face Recognition Enlist Subspace Method. International Journal of Computer Science and Information Security (IJCSIS), 9(6), 264–269.
dc.relation.references[20] Nimeroff, J. S., Simoncelli, E., & Dorsey, J. (1994). Efficient rerendering of naturally illuminated environments. Proceedings of the Fifth Annual Eurographics Symposium on Rendering.
dc.relation.references[21] Nishiyama, M., Kozakaya, T., & Yamaguchi, O. (2008). Illumination Normalization using Quotient Image-based Techniques. Recent Advances in Face Recognition, Kresimir Delac (Ed.), IntechOpen, 97–108. https://doi.org/10.5772/6396
dc.relation.references[22] Parubochyi, V., & Shuvar, R. (2019). Normalization Modifications for Fast Self-Quotient Image Method. 2019 XIth International Scientific and Practical Conference on Electronics and Information Technologies (ELIT), Lviv, Ukraine, 179–182. https://doi.org/10.1109/ELIT.2019.8892347
dc.relation.references[23] Parubochyi, V., & Shuwar, R. (2018). Fast self-quotient image method for lighting normalization based on modified Gaussian filter kernel. The Imaging Science Journal, 66(8), 471–478. https://doi.org/10.1080/13682199.2018.1517857
dc.relation.references[24] Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., Romeny, B. ter H., Zimmerman, J. B., & Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3), 355–368. https://doi.org/10.1016/S0734-189X(87)80186-X
dc.relation.references[25] Reza, A. M. (2004). Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement. The Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, 38, 35–44. https://doi.org/10.1023/B:VLSI.0000028532.53893.82
dc.relation.references[26] Riklin-Raviv, T., & Shashua, A. (1999). The quotient image: Class based recognition and synthesis under varying illumination conditions. Proceedings of 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, 566–571. https://doi.org/10.1109/CVPR.1999.784968
dc.relation.references[27] Shashua, A., & Riklin-Raviv, T. (2001). The Quotient Image: Class-Based Re-Rendering and Recognition with Varying Illuminations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 129–139. https://doi.org/10.1109/34.908964
dc.relation.references[28] Srisuk, S., & Petpon, A. (2008). A Gabor Quotient Image for Face Recognition under Varying Illumination. Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II (ISVC 08), Springer-Verlag, Berlin, Heidelberg, pp. 511–520. https://doi.org/10.1007/978-3-540-89646-3_50
dc.relation.references[29] Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86. https://doi.org/10.1162/jocn.1991.3.1.71
dc.relation.references[30] Wang, H., Li, S. Z., & Wang, Y. (2004). Face recognition under varying lighting conditions using self quotient image. Proceedings of Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, South Korea, 819–824. https://doi.org/10.1109/AFGR.2004.1301635
dc.relation.references[31] Wang, H., Li, S. Z., & Wang, Y. (2004). Generalized quotient image. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, 498–505. https://doi.org/10.1109/CVPR.2004.1315205
dc.relation.references[32] Wang, H., Li, S. Z., Wang, Y., & Zhang, J. (2004). Self quotient image for face recognition. 2004 International Conference on Image Processing (ICIP 04), Singapore, 2, 1397–1400. https://doi.org/10.1109/ICIP.2004.1419763
dc.relation.references[33] Xiao-guang, H., Jie, T., Li-fang, W., Yao-yao, Z., & Xin, Y. (2007). Illumination Normalization with Morphological Quotient Image. Journal of Software, 18(9), 2318–2325. https://doi.org/10.1360/jos182318
dc.relation.references[34] Zou, X., Kittler, J., & Messer, K. (2007). Illumination Invariant Face Recognition: A Survey. First IEEE International Conference on Biometrics: Theory, Applications, and Systems, 1–8. https://doi.org/10.1109/BTAS.2007.4401921
dc.rights.holder: © Національний університет “Львівська політехніка”, 2020
dc.subject: нормалізація освітлення
dc.subject: метод самооцінювання зображень
dc.subject: SQI
dc.subject: фільтр Гауса
dc.subject: фільтр Габора
dc.subject: метод Габора для самооцінювання зображень
dc.subject: GQI
dc.subject: метод швидкого самооцінювання зображень
dc.subject: FSQI
dc.subject: lighting normalization
dc.subject: illumination normalization
dc.subject: self-quotient image
dc.subject: SQI
dc.subject: Gaussian filter
dc.subject: Gabor filter
dc.subject: Gabor quotient image
dc.subject: GQI
dc.subject: fast self-quotient image
dc.subject: FSQI
dc.subject: illumination invariant face recognition
dc.title: Performance evaluation of Self-Quotient image methods
dc.title.alternative: Оцінка ефективності методів самооцінювання зображення
dc.type: Article

Files

Original bundle (2 files):
Name: 2020v2n1_Parubochyi_V_O-Performance_evaluation_8-14.pdf; Size: 1.5 MB; Format: Adobe Portable Document Format
Name: 2020v2n1_Parubochyi_V_O-Performance_evaluation_8-14__COVER.png; Size: 1.84 MB; Format: Portable Network Graphics

License bundle (1 file):
Name: license.txt; Size: 1.83 KB; Format: Plain Text