Використання нейронних мереж для визначення об’єктів на зображенні

dc.citation.epage240
dc.citation.issue1
dc.citation.journalTitleКомп’ютерні системи проектування. Теорія і практика
dc.citation.spage232
dc.contributor.affiliationНаціональний університет “Львівська політехніка”
dc.contributor.affiliationLviv Polytechnic National University
dc.contributor.authorЖеребух, Олег
dc.contributor.authorФармага, Ігор
dc.contributor.authorZherebukh, Oleh
dc.contributor.authorFarmaha, Ihor
dc.coverage.placenameЛьвів
dc.coverage.placenameLviv
dc.date.accessioned2025-03-11T09:52:37Z
dc.date.created2024-02-27
dc.date.issued2024-02-27
dc.description.abstractРозроблено модифіковану модель нейронної мережі на базі Yolo V5 та здійснено порівняння метрик якості класифікації об’єктів на відеозображеннях, побудованих на основі базових існуючих відомих архітектур нейронних мереж. Розглянуто застосування згорткових нейронних мереж для обробки зображень з камер відеоспостереження з метою розробки оптимізованого алгоритму для виявлення та класифікації об’єктів на відеозображеннях. Зроблено аналіз існуючих моделей та архітектур нейронних мереж для аналізу зображень і здійснено їх порівняння. Розглянуто можливості оптимізації процесу аналізу зображень за допомогою використання нейронних мереж.
dc.description.abstractA modified neural network model based on Yolo V5 was developed and the quality metrics of object classification on video images built on the basis of existing known basic neural network architectures were compared. The application of convolutional neural networks for processing images from video surveillance cameras is considered in order to develop an optimized algorithm for detecting and classifying objects on video images. The existing models and architectures of neural networks for image analysis were analyzed and compared. The possibilities of optimizing the process of image analysis using neural networks are considered.
dc.format.extent232-240
dc.format.pages9
dc.identifier.citationЖеребух О. Використання нейронних мереж для визначення об’єктів на зображенні / Олег Жеребух, Ігор Фармага // Комп’ютерні системи проектування. Теорія і практика. — Львів : Видавництво Львівської політехніки, 2024. — Том 6. — № 1. — С. 232–240.
dc.identifier.citationenZherebukh O. Using neural networks to identify objects in an image / Oleh Zherebukh, Ihor Farmaha // Computer Systems of Design. Theory and Practice. — Lviv : Lviv Polytechnic Publishing House, 2024. — Vol. 6. — No. 1. — P. 232–240.
dc.identifier.doihttps://doi.org/10.23939/cds2024.01.232
dc.identifier.urihttps://ena.lpnu.ua/handle/ntb/64116
dc.language.isouk
dc.publisherВидавництво Львівської політехніки
dc.publisherLviv Polytechnic Publishing House
dc.relation.ispartofКомп’ютерні системи проектування. Теорія і практика, 1 (6), 2024
dc.relation.ispartofComputer Systems of Design. Theory and Practice, 1 (6), 2024
dc.relation.references[1] Farmaha I., Salo Y. Medical object detection using computer vision tools and methods only // САПР у проектуванні машин. Питання впровадження та навчання : матеріали ХХХ Міжнародної польсько-української науково-технічної конференції (Львів, Україна, 1–2 грудня 2022 р.). – 2022. – C. 18.
dc.relation.references[2] Y.-L. Tian, L. Brown, A. Hampapur, M. Lu, A. Senior and C.-F. Shu, "IBM smart surveillance system (S3): Event based video surveillance system with an open and extensible framework", Mach. Vis. Appl., vol. 19, no. 5, pp. 315-327, Oct. 2008. https://doi.org/10.1007/s00138-008-0153-z
dc.relation.references[3] J. Fernández, L. Calavia, C. Baladrón, J. Aguiar, B. Carro, A. Sánchez-Esguevillas, et al., "An intelligent surveillance platform for large metropolitan areas with dense sensor deployment", Sensors, vol. 13, no. 6, pp. 7414-7442, Jun. 2013. https://doi.org/10.3390/s130607414
dc.relation.references[4] R. Baran, T. Rusc and P. Fornalski, "A smart camera for the surveillance of vehicles in intelligent transportation systems", Multimedia Tools Appl., vol. 75, no. 17, pp. 10471-10493, Sep. 2016. https://doi.org/10.1007/s11042-015-3151-y
dc.relation.references[5] D. Eigenraam and L. J. M. Rothkrantz, "A smart surveillance system of distributed smart multi cameras modelled as agents", Proc. Smart Cities Symp. Prague (SCSP), pp. 1-6, May 2016. https://doi.org/10.1109/SCSP.2016.7501018
dc.relation.references[6] Bosch Intelligent Video Analysis, May 2023, [Електронний ресурс] // Режим доступу: https://www.boschsecurity.com/xc/en/.
dc.relation.references[7] Bhubaneswar’s Smart Safety City Surveillance Project Powered By Honeywell Technologies, May 2023, [Електронний ресурс] // Режим доступу: https://buildings.honeywell.com/content/dam/hbtbt/en/documents/downloads/Bhubaneswar-CS_0420_V2.pdf.
dc.relation.references[8] Hitachi: Data Integration Helps Smart Cities Fight Crime Iot-hitachi-smart Communities-solution, May 2023, [online] Available: https://www.intel.com/content/dam/www/public/emea/xe/en/documents/.
dc.relation.references[9] Iomniscient, May 2023, [Електронний ресурс] // Режим доступу: https://iomni.ai/oursolutions/.
dc.relation.references[10] E. B. Varghese and S. M. Thampi, "A cognitive IoT smart surveillance framework for crowd behavior analysis", Proc. Int. Conf. Commun. Syst. Netw. (COMSNETS), pp. 360-362, Jan. 2021. https://doi.org/10.1109/COMSNETS51098.2021.9352910
dc.relation.references[11] V. Sharma, M. Gupta, A. Kumar and D. Mishra, "Video processing using deep learning techniques: A systematic literature review", IEEE Access, vol. 9, pp. 139489-139507, 2021. https://doi.org/10.1109/ACCESS.2021.3118541
dc.relation.references[12] Farmaha I. Wound image segmentation using clustering based algorithms / I. Farmaha, M. Banaś, V. Savchyn, B. Lukashchuk, T. Farmaha // New trends in production engineering : колективна монографія. – Warszawa, Poland : Sciendo, 2019. – С. 217–225.
dc.relation.references[13] Jaworski Nazariy, Farmaha Ihor, Farmaha Taras, Savchyn Vasyl, Marikutsa Uliana. Implementation features of wounds visual comparison subsystem // Перспективні технології і методи проектування МЕМС : матеріали XIV Міжнародної науково-технічної конференції, 18–22 квітня 2018 р., Поляна, Україна. – 2018. – P. 114–117. (Google Scholar, SciVerse SCOPUS, Web of Science). https://doi.org/10.1109/MEMSTECH.2018.8365714
dc.relation.references[14] C. Dhiman and D. K. Vishwakarma, "A review of state-of-the-art techniques for abnormal human activity recognition", Eng. Appl. Artif. Intell., vol. 77, pp. 21-45, Jan. 2019. https://doi.org/10.1016/j.engappai.2018.08.014
dc.relation.references[15] Yang, R., Yu, J., Yin, J., Liu, K., & Xu, S. (2022). A dense R-CNN multi-target instance segmentation model and its application in medical image processing. IET Image Processing, 16(9).
dc.relation.references[16] Szajna, A., Kostrzewski, M., Ciebiera, K., Stryjski, R., & Sciubba, E. (2021). Application of the deep CNN-based method in industrial system for wire marking identification. Energies, 14(12). https://doi.org/10.3390/en14123659
dc.relation.references[17] Took, C. C., & Mandic, D. (2022). Weight sharing for LMS algorithms: Convolutional neural networks inspired multichannel adaptive filtering. Digital Signal Processing.
dc.relation.references[18] Weiller, C., Reisert, M., Glauche, V., Musso, M., & Rijntjes, M. (2022). The dual-loop model for combining external and internal worlds in our brain. NeuroImage, 263, 119583. https://doi.org/10.1016/j.neuroimage.2022.119583
dc.relation.references[19] Tremeau A., Borel N. A region growing and merging algorithm to color segmentation [J]. Pattern Recognition, 1997, 30(7): 1191–1203. https://doi.org/10.1016/S0031-3203(96)00147-1
dc.relation.references[20] Levinshtein A., Stere A., Kutulakos K. N., et al. TurboPixels: Fast superpixels using geometric flows [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(12): 2290–2297. https://doi.org/10.1109/TPAMI.2009.96
dc.relation.references[21] Bazgir O., Zhang R., Dhruba S. R., et al. Representation of features as image with neighborhood dependencies for compatibility with convolutional neural networks [J]. Nature Communications, 2020, 11(1): 4391. https://doi.org/10.1038/s41467-020-18197-y
dc.relation.references[22] Chen L. C., Papandreou G., Kokkinos I., et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834–848. https://doi.org/10.1109/TPAMI.2017.2699184
dc.relation.referencesen[1] Farmaha I., Salo Y. Medical object detection using computer vision tools and methods only, SAPR u proektuvanni mashyn. Pytannia vprovadzhennia ta navchannia : materialy XXX Mizhnarodnoi polsko-ukrainskoi naukovo-tekhnichnoi konferentsii (Lviv, Ukraine, 1–2 hrudnia 2022), 2022, P. 18.
dc.relation.referencesen[2] Y.-L. Tian, L. Brown, A. Hampapur, M. Lu, A. Senior and C.-F. Shu, "IBM smart surveillance system (S3): Event based video surveillance system with an open and extensible framework", Mach. Vis. Appl., vol. 19, no. 5, pp. 315-327, Oct. 2008. https://doi.org/10.1007/s00138-008-0153-z
dc.relation.referencesen[3] J. Fernández, L. Calavia, C. Baladrón, J. Aguiar, B. Carro, A. Sánchez-Esguevillas, et al., "An intelligent surveillance platform for large metropolitan areas with dense sensor deployment", Sensors, vol. 13, no. 6, pp. 7414-7442, Jun. 2013. https://doi.org/10.3390/s130607414
dc.relation.referencesen[4] R. Baran, T. Rusc and P. Fornalski, "A smart camera for the surveillance of vehicles in intelligent transportation systems", Multimedia Tools Appl., vol. 75, no. 17, pp. 10471-10493, Sep. 2016. https://doi.org/10.1007/s11042-015-3151-y
dc.relation.referencesen[5] D. Eigenraam and L. J. M. Rothkrantz, "A smart surveillance system of distributed smart multi cameras modelled as agents", Proc. Smart Cities Symp. Prague (SCSP), pp. 1-6, May 2016. https://doi.org/10.1109/SCSP.2016.7501018
dc.relation.referencesen[6] Bosch Intelligent Video Analysis, May 2023, [Electronic resource], Access mode: https://www.boschsecurity.com/xc/en/.
dc.relation.referencesen[7] Bhubaneswar’s Smart Safety City Surveillance Project Powered By Honeywell Technologies, May 2023, [Electronic resource], Access mode: https://buildings.honeywell.com/content/dam/hbtbt/en/documents/downloads/Bhubaneswar-CS_0420_V2.pdf.
dc.relation.referencesen[8] Hitachi: Data Integration Helps Smart Cities Fight Crime Iot-hitachi-smart Communities-solution, May 2023, [online] Available: https://www.intel.com/content/dam/www/public/emea/xe/en/documents/.
dc.relation.referencesen[9] Iomniscient, May 2023, [Electronic resource], Access mode: https://iomni.ai/oursolutions/.
dc.relation.referencesen[10] E. B. Varghese and S. M. Thampi, "A cognitive IoT smart surveillance framework for crowd behavior analysis", Proc. Int. Conf. Commun. Syst. Netw. (COMSNETS), pp. 360-362, Jan. 2021. https://doi.org/10.1109/COMSNETS51098.2021.9352910
dc.relation.referencesen[11] V. Sharma, M. Gupta, A. Kumar and D. Mishra, "Video processing using deep learning techniques: A systematic literature review", IEEE Access, vol. 9, pp. 139489-139507, 2021. https://doi.org/10.1109/ACCESS.2021.3118541
dc.relation.referencesen[12] Farmaha I. Wound image segmentation using clustering based algorithms / I. Farmaha, M. Banaś, V. Savchyn, B. Lukashchuk, T. Farmaha // New trends in production engineering : collective monograph, Warszawa, Poland: Sciendo, 2019, P. 217–225.
dc.relation.referencesen[13] Jaworski Nazariy, Farmaha Ihor, Farmaha Taras, Savchyn Vasyl, Marikutsa Uliana. Implementation features of wounds visual comparison subsystem, Perspektyvni tekhnolohii i metody proektuvannia MEMS : materialy XIV Mizhnarodnoi naukovo-tekhnichnoi konferentsii, 18–22 kvitnia 2018, Poliana, Ukraine, 2018, P. 114–117. (Google Scholar, SciVerse SCOPUS, Web of Science). https://doi.org/10.1109/MEMSTECH.2018.8365714
dc.relation.referencesen[14] C. Dhiman and D. K. Vishwakarma, "A review of state-of-the-art techniques for abnormal human activity recognition", Eng. Appl. Artif. Intell., vol. 77, pp. 21-45, Jan. 2019. https://doi.org/10.1016/j.engappai.2018.08.014
dc.relation.referencesen[15] Yang, R., Yu, J., Yin, J., Liu, K., & Xu, S. (2022). A dense R-CNN multi-target instance segmentation model and its application in medical image processing. IET Image Processing, 16(9).
dc.relation.referencesen[16] Szajna, A., Kostrzewski, M., Ciebiera, K., Stryjski, R., & Sciubba, E. (2021). Application of the deep CNN-based method in industrial system for wire marking identification. Energies, 14(12). https://doi.org/10.3390/en14123659
dc.relation.referencesen[17] Took, C. C., & Mandic, D. (2022). Weight sharing for LMS algorithms: Convolutional neural networks inspired multichannel adaptive filtering. Digital Signal Processing.
dc.relation.referencesen[18] Weiller, C., Reisert, M., Glauche, V., Musso, M., & Rijntjes, M. (2022). The dual-loop model for combining external and internal worlds in our brain. NeuroImage, 263, 119583. https://doi.org/10.1016/j.neuroimage.2022.119583
dc.relation.referencesen[19] Tremeau A., Borel N. A region growing and merging algorithm to color segmentation [J]. Pattern Recognition, 1997, 30(7): 1191–1203. https://doi.org/10.1016/S0031-3203(96)00147-1
dc.relation.referencesen[20] Levinshtein A., Stere A., Kutulakos K. N., et al. TurboPixels: Fast superpixels using geometric flows [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(12): 2290–2297. https://doi.org/10.1109/TPAMI.2009.96
dc.relation.referencesen[21] Bazgir O., Zhang R., Dhruba S. R., et al. Representation of features as image with neighborhood dependencies for compatibility with convolutional neural networks [J]. Nature Communications, 2020, 11(1): 4391. https://doi.org/10.1038/s41467-020-18197-y
dc.relation.referencesen[22] Chen L. C., Papandreou G., Kokkinos I., et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834–848. https://doi.org/10.1109/TPAMI.2017.2699184
dc.relation.urihttps://doi.org/10.1007/s00138-008-0153-z
dc.relation.urihttps://doi.org/10.3390/s130607414
dc.relation.urihttps://doi.org/10.1007/s11042-015-3151-y
dc.relation.urihttps://doi.org/10.1109/SCSP.2016.7501018
dc.relation.urihttps://www.boschsecurity.com/xc/en/
dc.relation.urihttps://buildings.honeywell.com/content/dam/hbtbt/en/documents/
dc.relation.urihttps://www.intel.com/content/dam/www/public/emea/xe/en/documents/
dc.relation.urihttps://iomni.ai/oursolutions/
dc.relation.urihttps://doi.org/10.1109/COMSNETS51098.2021.9352910
dc.relation.urihttps://doi.org/10.1109/ACCESS.2021.3118541
dc.relation.urihttps://doi.org/10.1109/MEMSTECH.2018.8365714
dc.relation.urihttps://doi.org/10.1016/j.engappai.2018.08.014
dc.relation.urihttps://doi.org/10.3390/en14123659
dc.relation.urihttps://doi.org/10.1016/j.neuroimage.2022.119583
dc.relation.urihttps://doi.org/10.1016/S0031-3203(96)00147-1
dc.relation.urihttps://doi.org/10.1109/TPAMI.2009.96
dc.relation.urihttps://doi.org/10.1038/s41467-020-18197-y
dc.relation.urihttps://doi.org/10.1109/TPAMI.2017.2699184
dc.rights.holder© Національний університет “Львівська політехніка”, 2024
dc.rights.holder© Жеребух О., Фармага І., 2024
dc.subjectзгорткові нейронні мережі
dc.subjectCNN
dc.subjectвиявлення об’єктів
dc.subjectшвидкодія обробки відеозображень
dc.subjectвластивості та ознаки зображень
dc.subjectconvolutional neural networks
dc.subjectCNN
dc.subjectobject detection
dc.subjectspeed of video image processing
dc.subjectimage properties and features
dc.titleВикористання нейронних мереж для визначення об’єктів на зображенні
dc.title.alternativeUsing neural networks to identify objects in an image
dc.typeArticle

Files

Original bundle (2 files)
- 2024v6n1_Zherebukh_O-Using_neural_networks_232-240.pdf (1.24 MB, Adobe Portable Document Format)
- 2024v6n1_Zherebukh_O-Using_neural_networks_232-240__COVER.png (459.63 KB, Portable Network Graphics)

License bundle (1 file)
- license.txt (1.8 KB, Plain Text)