Improving pedestrian segmentation using region proposal-based CNN semantic segmentation
dc.citation.epage | 863 | |
dc.citation.issue | 3 | |
dc.citation.journalTitle | Математичне моделювання та комп'ютинг | |
dc.citation.spage | 854 | |
dc.contributor.affiliation | Університет Каді Айяд | |
dc.contributor.affiliation | Університет Гюстава Ейфеля | |
dc.contributor.affiliation | Cadi Ayyad University | |
dc.contributor.affiliation | Gustave Eiffel University | |
dc.contributor.author | Лахгазі, М. Дж. | |
dc.contributor.author | Аргул, П. | |
dc.contributor.author | Хакім, А. | |
dc.contributor.author | Lahgazi, M. J. | |
dc.contributor.author | Argoul, P. | |
dc.contributor.author | Hakim, A. | |
dc.coverage.placename | Львів | |
dc.coverage.placename | Lviv | |
dc.date.accessioned | 2025-03-04T12:17:29Z | |
dc.date.created | 2023-02-28 | |
dc.date.issued | 2023-02-28 | |
dc.description.abstract | Сегментація пішоходів є критично важливим завданням комп’ютерного зору, але моделям сегментації може бути складно точно класифікувати пішоходів на зображеннях із складним фоном і змінами яскравості, а також оклюзіями. Ця проблема ще більше ускладнюється для стиснутих моделей, які були розроблені для роботи з високими обчислювальними вимогами глибинних нейронних мереж. Щоб вирішити ці проблеми, запропоновано новий підхід, який інтегрує структуру на основі регіональних пропозицій у процес сегментації. Щоб оцінити продуктивність запропонованого фреймворку, проведено експерименти з набором даних PASCAL VOC, який містить складні фони. Використовуємо дві різні моделі сегментації, UNet і SqueezeUNet, щоб оцінити вплив регіональних пропозицій на ефективність сегментації. Проведені експерименти показують, що включення регіональних пропозицій значно покращує точність сегментації та зменшує кількість помилкових пікселів у фоні, що призводить до кращої загальної продуктивності. Зокрема, модель SqueezeUNet забезпечує середнє значення Intersection over Union (mIoU) 0.682, що на 12% краще порівняно з базовою моделлю SqueezeUNet без регіональних пропозицій. Подібним чином, модель UNet досягає mIoU 0.678, що є покращенням на 13% порівняно з базовою моделлю UNet без регіональних пропозицій. | |
dc.description.abstract | Pedestrian segmentation is a critical task in computer vision, but it can be challenging for segmentation models to accurately classify pedestrians in images with challenging backgrounds and luminosity changes, as well as occlusions. This challenge is further compounded for compressed models that were designed to deal with the high computational demands of deep neural networks. To address these challenges, we propose a novel approach that integrates a region proposal-based framework into the segmentation process. To evaluate the performance of the proposed framework, we conduct experiments on the PASCAL VOC dataset, which presents challenging backgrounds. We use two different segmentation models, UNet and SqueezeUNet, to evaluate the impact of region proposals on segmentation performance. Our experiments show that the incorporation of region proposals significantly improves segmentation accuracy and reduces false positive pixels in the background, leading to better overall performance. Specifically, the SqueezeUNet model achieves a mean Intersection over Union (mIoU) of 0.682, which is a 12% improvement over the baseline SqueezeUNet model without region proposals. Similarly, the UNet model achieves a mIoU of 0.678, which is a 13% improvement over the baseline UNet model without region proposals. | |
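The abstract reports results as mean Intersection over Union (mIoU). As a minimal sketch of how that metric is conventionally computed from predicted and ground-truth label masks (the function name and toy masks below are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union, averaged over classes present in
    either mask. pred and target are integer class-label arrays of the
    same shape."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent from both masks: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2-class example: 0 = background, 1 = pedestrian.
target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))
```

Fewer false-positive background pixels shrink each class union relative to its intersection, which is why the abstract's reported reduction in background false positives translates directly into a higher mIoU.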
dc.format.extent | 854-863 | |
dc.format.pages | 10 | |
dc.identifier.citation | Lahgazi M. J. Improving pedestrian segmentation using region proposal-based CNN semantic segmentation / M. J. Lahgazi, P. Argoul, A. Hakim // Mathematical Modeling and Computing. — Lviv : Lviv Politechnic Publishing House, 2023. — Vol 10. — No 3. — P. 854–863. | |
dc.identifier.citationen | Lahgazi M. J. Improving pedestrian segmentation using region proposal-based CNN semantic segmentation / M. J. Lahgazi, P. Argoul, A. Hakim // Mathematical Modeling and Computing. — Lviv : Lviv Politechnic Publishing House, 2023. — Vol 10. — No 3. — P. 854–863. | |
dc.identifier.doi | https://doi.org/10.23939/mmc2023.03.854 | |
dc.identifier.uri | https://ena.lpnu.ua/handle/ntb/63522 | |
dc.language.iso | en | |
dc.publisher | Видавництво Львівської політехніки | |
dc.publisher | Lviv Politechnic Publishing House | |
dc.relation.ispartof | Математичне моделювання та комп'ютинг, 3 (10), 2023 | |
dc.relation.ispartof | Mathematical Modeling and Computing, 3 (10), 2023 | |
dc.relation.references | [1] Minaee S., Boykov Y. Y., Porikli F., Plaza A. J., Kehtarnavaz N., Terzopoulos D. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 44 (7), 3523–3542 (2021). | |
dc.relation.references | [2] Hearst M. A., Dumais S. T., Osuna E., Platt J., Scholkopf B. Support vector machines. IEEE Intelligent Systems and their Applications. 13 (4), 18–28 (1998). | |
dc.relation.references | [3] Lahgazi M. J., Hakim A., Argoul P. An adaptive wavelet shrinkage based accumulative frame differencing model for motion segmentation. Mathematical Modeling and Computing. 10 (1), 159–170 (2023). | |
dc.relation.references | [4] Dalal N., Triggs B. Histograms of oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). 1, 886–893 (2005). | |
dc.relation.references | [5] Ashok V., Balakumaran T., Gowrishankar C., Vennila I. L. A., Nirmal Kumar A. The Fast Haar Wavelet Transform for Signal & Image Processing. International Journal of Computer Science and Information Security. 7 (2010). | |
dc.relation.references | [6] Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39 (6), 1137–1149 (2015). | |
dc.relation.references | [7] Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: Unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 779–788 (2016). | |
dc.relation.references | [8] Bochkovskiy A., Wang C.-Y., Liao H.-Y. M. YOLOv4: Optimal Speed and Accuracy of Object Detection. Preprint arXiv:2004.10934 (2020). | |
dc.relation.references | [9] Law H., Deng J. CornerNet: Detecting objects as paired keypoints. Proceedings of the European Conference on Computer Vision (ECCV). 734–750 (2018). | |
dc.relation.references | [10] Bolya D., Zhou C., Xiao F., Lee Y. J. YOLACT: Real-time instance segmentation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 9156–9165 (2019). | |
dc.relation.references | [11] Pavani G., Biswal B., Gandhi T. K. Multistage DPIRef-Net: An effective network for semantic segmentation of arteries and veins from retinal surface. Neuroscience Informatics. 2 (4), 100074 (2022). | |
dc.relation.references | [12] Biswal B., Geetha P. P., Prasanna T., Karn P. K. Robust segmentation of exudates from retinal surface using M-CapsNet via EM routing. Biomedical Signal Processing and Control. 68, 102770 (2021). | |
dc.relation.references | [13] Xie H.-X., Lin C.-Y., Zheng H., Lin P.-Y. An UNet-based head shoulder segmentation network. 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). 1–2 (2018). | |
dc.relation.references | [14] Wang P., Bai X. Thermal infrared pedestrian segmentation based on conditional GAN. IEEE Transactions on Image Processing. 28 (12), 6007–6021 (2019). | |
dc.relation.references | [15] Baheti B., Innani S., Gajre S., Talbar S. Eff-UNet: A novel architecture for semantic segmentation in unstructured environment. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1473–1481 (2020). | |
dc.relation.references | [16] Liu T., Stathaki T. Faster R-CNN for robust pedestrian detection using semantic segmentation network. Frontiers in Neurorobotics. 12, 64 (2018). | |
dc.relation.references | [17] Yuan L., Qiu Z. Mask-RCNN with spatial attention for pedestrian segmentation in cyber-physical systems. Computer Communications. 180, 109–114 (2021). | |
dc.relation.references | [18] Syed A., Morris B. T. CNN, segmentation or semantic embeddings: evaluating scene context for trajectory prediction. International Symposium on Visual Computing. 706–717 (2020). | |
dc.relation.references | [19] Gao G., Gao J., Liu Q., Wang Q., Wang Y. CNN-based density estimation and crowd counting: A survey. Preprint arXiv:2003.12783 (2020). | |
dc.relation.references | [20] Luo J.-H., Zhang H., Zhou H.-Y., Xie C.-W., Wu J., Lin W. ThiNet: pruning CNN filters for a thinner net. IEEE Transactions on Pattern Analysis and Machine Intelligence. 41 (10), 2525–2538 (2018). | |
dc.relation.references | [21] Reed R. Pruning algorithms-a survey. IEEE Transactions on Neural Networks. 4 (5), 740–747 (1993). | |
dc.relation.references | [22] Han S., Pool J., Tran J., Dally W. Learning both weights and connections for efficient neural network. Proceedings of the 28th International Conference on Neural Information Processing Systems. 1, 1135–1143 (2015). | |
dc.relation.references | [23] Li H., Kadav A., Durdanovic I., Samet H., Graf H. P. Pruning filters for efficient convnets. Preprint arXiv:1608.08710 (2017). | |
dc.relation.references | [24] He Y., Lin J., Liu Z., Wang H., Li L.-J., Han S. AMC: AutoML for model compression and acceleration on mobile devices. Proceedings of the European conference on Computer Vision (ECCV). 815–832 (2018). | |
dc.relation.references | [25] Liu Z., Mu H., Zhang X., Guo Z., Yang X., Cheng K.-T., Sun J. MetaPruning: Meta learning for automatic neural network channel pruning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 3295–3304 (2019). | |
dc.relation.references | [26] He Y., Ding Y., Liu P., Zhu L., Zhang H., Yang Y. Learning filter pruning criteria for deep convolutional neural networks acceleration. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2006–2015 (2020). | |
dc.relation.references | [27] Sainath T. N., Kingsbury B., Sindhwani V., Arisoy E., Ramabhadran B. Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 6655–6659 (2013). | |
dc.relation.references | [28] Jaderberg M., Vedaldi A., Zisserman A. Speeding up convolutional neural networks with low rank expansions. Preprint arXiv:1405.3866 (2014). | |
dc.relation.references | [29] Denton E. L., Zaremba W., Bruna J., LeCun Y., Fergus R. Exploiting linear structure within convolutional networks for efficient evaluation. Proceedings of the 27th International Conference on Neural Information Processing Systems. 1, 1269–1277 (2014). | |
dc.relation.references | [30] Yin M., Sui Y., Liao S., Yuan B. Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10669–10678 (2021). | |
dc.relation.references | [31] Wu B., Wang D., Zhao G., Deng L., Li G. Hybrid tensor decomposition in neural network compression. Neural Networks. 132, 309–320 (2020). | |
dc.relation.references | [32] Bai Z., Li Y., Woźniak M., Zhou M., Li D. DecomVQANet: Decomposing visual question answering deep network via tensor decomposition and regression. Pattern Recognition. 110, 107538 (2021). | |
dc.relation.references | [33] Iandola F. N., Han S., Moskewicz M. W., Ashraf K., Dally W. J., Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size. Preprint arXiv:1602.07360 (2016). | |
dc.relation.references | [34] Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L. C. MobileNetV2: Inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4510–4520 (2018). | |
dc.relation.references | [35] Lee D.-H., Liu J.-L. End-to-end deep learning of lane detection and path prediction for real-time autonomous driving. Signal, Image and Video Processing. 17, 199–205 (2022). | |
dc.relation.references | [36] Chollet F. Xception: Deep learning with depthwise separable convolutions. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1800–1807 (2017). | |
dc.relation.references | [37] Wu C. W. ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions. Preprint arXiv:1809.02209 (2018). | |
dc.relation.references | [38] Cséfalvay S., Imber J. Self-Compressing Neural Networks. Preprint arXiv:2301.13142 (2023). | |
dc.relation.references | [39] Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention. 9351, 234–241 (2015). | |
dc.relation.references | [40] Beheshti N., Johnsson L. Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1495–1504 (2020). | |
dc.relation.references | [41] Zhang S. H., Li R., Dong X., Rosin P., Cai Z., Han X., Yang D., Huang H., Hu S. M. Pose2Seg: Detection Free Human Instance Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 889–898 (2019). | |
dc.relation.referencesen | [1] Minaee S., Boykov Y. Y., Porikli F., Plaza A. J., Kehtarnavaz N., Terzopoulos D. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 44 (7), 3523–3542 (2021). | |
dc.relation.referencesen | [2] Hearst M. A., Dumais S. T., Osuna E., Platt J., Scholkopf B. Support vector machines. IEEE Intelligent Systems and their Applications. 13 (4), 18–28 (1998). | |
dc.relation.referencesen | [3] Lahgazi M. J., Hakim A., Argoul P. An adaptive wavelet shrinkage based accumulative frame differencing model for motion segmentation. Mathematical Modeling and Computing. 10 (1), 159–170 (2023). | |
dc.relation.referencesen | [4] Dalal N., Triggs B. Histograms of oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). 1, 886–893 (2005). | |
dc.relation.referencesen | [5] Ashok V., Balakumaran T., Gowrishankar C., Vennila I. L. A., Nirmal Kumar A. The Fast Haar Wavelet Transform for Signal & Image Processing. International Journal of Computer Science and Information Security. 7 (2010). | |
dc.relation.referencesen | [6] Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39 (6), 1137–1149 (2015). | |
dc.relation.referencesen | [7] Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: Unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 779–788 (2016). | |
dc.relation.referencesen | [8] Bochkovskiy A., Wang C.-Y., Liao H.-Y. M. YOLOv4: Optimal Speed and Accuracy of Object Detection. Preprint arXiv:2004.10934 (2020). | |
dc.relation.referencesen | [9] Law H., Deng J. CornerNet: Detecting objects as paired keypoints. Proceedings of the European Conference on Computer Vision (ECCV). 734–750 (2018). | |
dc.relation.referencesen | [10] Bolya D., Zhou C., Xiao F., Lee Y. J. YOLACT: Real-time instance segmentation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 9156–9165 (2019). | |
dc.relation.referencesen | [11] Pavani G., Biswal B., Gandhi T. K. Multistage DPIRef-Net: An effective network for semantic segmentation of arteries and veins from retinal surface. Neuroscience Informatics. 2 (4), 100074 (2022). | |
dc.relation.referencesen | [12] Biswal B., Geetha P. P., Prasanna T., Karn P. K. Robust segmentation of exudates from retinal surface using M-CapsNet via EM routing. Biomedical Signal Processing and Control. 68, 102770 (2021). | |
dc.relation.referencesen | [13] Xie H.-X., Lin C.-Y., Zheng H., Lin P.-Y. An UNet-based head shoulder segmentation network. 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). 1–2 (2018). | |
dc.relation.referencesen | [14] Wang P., Bai X. Thermal infrared pedestrian segmentation based on conditional GAN. IEEE Transactions on Image Processing. 28 (12), 6007–6021 (2019). | |
dc.relation.referencesen | [15] Baheti B., Innani S., Gajre S., Talbar S. Eff-UNet: A novel architecture for semantic segmentation in unstructured environment. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1473–1481 (2020). | |
dc.relation.referencesen | [16] Liu T., Stathaki T. Faster R-CNN for robust pedestrian detection using semantic segmentation network. Frontiers in Neurorobotics. 12, 64 (2018). | |
dc.relation.referencesen | [17] Yuan L., Qiu Z. Mask-RCNN with spatial attention for pedestrian segmentation in cyber-physical systems. Computer Communications. 180, 109–114 (2021). | |
dc.relation.referencesen | [18] Syed A., Morris B. T. CNN, segmentation or semantic embeddings: evaluating scene context for trajectory prediction. International Symposium on Visual Computing. 706–717 (2020). | |
dc.relation.referencesen | [19] Gao G., Gao J., Liu Q., Wang Q., Wang Y. CNN-based density estimation and crowd counting: A survey. Preprint arXiv:2003.12783 (2020). | |
dc.relation.referencesen | [20] Luo J.-H., Zhang H., Zhou H.-Y., Xie C.-W., Wu J., Lin W. ThiNet: pruning CNN filters for a thinner net. IEEE Transactions on Pattern Analysis and Machine Intelligence. 41 (10), 2525–2538 (2018). | |
dc.relation.referencesen | [21] Reed R. Pruning algorithms-a survey. IEEE Transactions on Neural Networks. 4 (5), 740–747 (1993). | |
dc.relation.referencesen | [22] Han S., Pool J., Tran J., Dally W. Learning both weights and connections for efficient neural network. Proceedings of the 28th International Conference on Neural Information Processing Systems. 1, 1135–1143 (2015). | |
dc.relation.referencesen | [23] Li H., Kadav A., Durdanovic I., Samet H., Graf H. P. Pruning filters for efficient convnets. Preprint arXiv:1608.08710 (2017). | |
dc.relation.referencesen | [24] He Y., Lin J., Liu Z., Wang H., Li L.-J., Han S. AMC: AutoML for model compression and acceleration on mobile devices. Proceedings of the European conference on Computer Vision (ECCV). 815–832 (2018). | |
dc.relation.referencesen | [25] Liu Z., Mu H., Zhang X., Guo Z., Yang X., Cheng K.-T., Sun J. MetaPruning: Meta learning for automatic neural network channel pruning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 3295–3304 (2019). | |
dc.relation.referencesen | [26] He Y., Ding Y., Liu P., Zhu L., Zhang H., Yang Y. Learning filter pruning criteria for deep convolutional neural networks acceleration. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2006–2015 (2020). | |
dc.relation.referencesen | [27] Sainath T. N., Kingsbury B., Sindhwani V., Arisoy E., Ramabhadran B. Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 6655–6659 (2013). | |
dc.relation.referencesen | [28] Jaderberg M., Vedaldi A., Zisserman A. Speeding up convolutional neural networks with low rank expansions. Preprint arXiv:1405.3866 (2014). | |
dc.relation.referencesen | [29] Denton E. L., Zaremba W., Bruna J., LeCun Y., Fergus R. Exploiting linear structure within convolutional networks for efficient evaluation. Proceedings of the 27th International Conference on Neural Information Processing Systems. 1, 1269–1277 (2014). | |
dc.relation.referencesen | [30] Yin M., Sui Y., Liao S., Yuan B. Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10669–10678 (2021). | |
dc.relation.referencesen | [31] Wu B., Wang D., Zhao G., Deng L., Li G. Hybrid tensor decomposition in neural network compression. Neural Networks. 132, 309–320 (2020). | |
dc.relation.referencesen | [32] Bai Z., Li Y., Woźniak M., Zhou M., Li D. DecomVQANet: Decomposing visual question answering deep network via tensor decomposition and regression. Pattern Recognition. 110, 107538 (2021). | |
dc.relation.referencesen | [33] Iandola F. N., Han S., Moskewicz M. W., Ashraf K., Dally W. J., Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size. Preprint arXiv:1602.07360 (2016). | |
dc.relation.referencesen | [34] Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L. C. MobileNetV2: Inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4510–4520 (2018). | |
dc.relation.referencesen | [35] Lee D.-H., Liu J.-L. End-to-end deep learning of lane detection and path prediction for real-time autonomous driving. Signal, Image and Video Processing. 17, 199–205 (2022). | |
dc.relation.referencesen | [36] Chollet F. Xception: Deep learning with depthwise separable convolutions. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1800–1807 (2017). | |
dc.relation.referencesen | [37] Wu C. W. ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions. Preprint arXiv:1809.02209 (2018). | |
dc.relation.referencesen | [38] Cséfalvay S., Imber J. Self-Compressing Neural Networks. Preprint arXiv:2301.13142 (2023). | |
dc.relation.referencesen | [39] Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention. 9351, 234–241 (2015). | |
dc.relation.referencesen | [40] Beheshti N., Johnsson L. Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1495–1504 (2020). | |
dc.relation.referencesen | [41] Zhang S. H., Li R., Dong X., Rosin P., Cai Z., Han X., Yang D., Huang H., Hu S. M. Pose2Seg: Detection Free Human Instance Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 889–898 (2019). | |
dc.rights.holder | © Національний університет “Львівська політехніка”, 2023 | |
dc.subject | сегментація пішоходів | |
dc.subject | пропозиції регіону | |
dc.subject | UNet | |
dc.subject | виявлення об’єктів | |
dc.subject | семантична сегментація | |
dc.subject | згорткові нейронні мережі (CNN) | |
dc.subject | pedestrian segmentation | |
dc.subject | region proposals | |
dc.subject | UNet | |
dc.subject | object detection | |
dc.subject | semantic segmentation | |
dc.subject | convolutional neural networks (CNNs) | |
dc.title | Improving pedestrian segmentation using region proposal-based CNN semantic segmentation | |
dc.title.alternative | Покращення сегментації пішоходів за допомогою семантичної сегментації CNN на основі регіональних пропозицій | |
dc.type | Article |