Speech Models Training Technologies Comparison Using Word Error Rate

dc.citation.epage80
dc.citation.issue1
dc.citation.spage74
dc.contributor.affiliationLviv Polytechnic National University
dc.contributor.authorYakubovskyi, Roman
dc.contributor.authorMorozov, Yuriy
dc.coverage.placenameЛьвів
dc.coverage.placenameLviv
dc.date.accessioned2024-02-13T09:56:53Z
dc.date.available2024-02-13T09:56:53Z
dc.date.created2023-02-28
dc.date.issued2023-02-28
dc.description.abstractThe main purpose of this work is to analyze and compare several technologies used for training speech models, including traditional approaches such as Hidden Markov Models (HMMs) and more recent methods such as Deep Neural Networks (DNNs). The technologies have been explained and compared using the word error rate metric, based on an input of 1000 words spoken by a user with 15 dB of background noise. The word error rate metric has been explained and calculated. Potential replacements for the compared technologies have been provided, including attention-based, generative, sparse, and quantum-inspired models. The pros and cons of these techniques as potential replacements have been analyzed and listed. Data analysis tools and methods have been explained, and the most common datasets used with HMM and DNN technologies have been described. Real-life usage examples of both methods have been provided, and systems based on them have been analyzed.
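The comparison described in the abstract relies on the standard definition of the word error rate, WER = (S + D + I) / N, where S, D and I are the word substitutions, deletions and insertions in the recognizer output and N is the number of words in the reference transcript. Below is a minimal Python sketch of such a calculation; the function name and the example sentences are illustrative and are not taken from the paper.

# Minimal sketch of a word error rate (WER) calculation, assuming the
# standard definition WER = (S + D + I) / N; not the paper's own code.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum number of edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # substitution or match
                          d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1)        # insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative usage: 1 substitution and 1 deletion over a 9-word reference.
reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over lazy dog"
print(f"WER = {word_error_rate(reference, hypothesis):.2f}")   # 0.22

With a 1000-word reference input, as in the paper's test setup, a WER of 0.05 would correspond to roughly 50 misrecognized words.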
dc.format.extent74-80
dc.format.pages7
dc.identifier.citationYakubovskyi R. Speech Models Training Technologies Comparison Using Word Error Rate / Roman Yakubovskyi, Yuriy Morozov // Advances in Cyber-Physical Systems. — Lviv : Lviv Polytechnic Publishing House, 2023. — Vol. 8. — No. 1. — P. 74–80.
dc.identifier.doidoi.org/10.23939/acps2023.01.074
dc.identifier.urihttps://ena.lpnu.ua/handle/ntb/61321
dc.language.isoen
dc.publisherВидавництво Львівської політехніки
dc.publisherLviv Polytechnic Publishing House
dc.relation.ispartofAdvances in Cyber-Physical Systems, 1 (8), 2023
dc.relation.referencesBorovets D., Pavych T., Paramud Y., (2021). Computer System for Converting Gestures to Text and Audio Messages. Advances in Cyber-Physical Systems. Vol. 6, No. 2. Pp. 90–97. DOI: https://doi.org/10.23939/acps2021.02.090
dc.relation.referencesEmiru E. D., Li Y., Xiong S., Fesseha A., (2019). Speech recognition system based on deep neural network acoustic modeling for low resourced language-Amharic. ICTCE '19: Proceedings of the 3rd International Conference on Telecommunications and Communication Engineering. [Online]. Pp. 141–145. DOI: https://dl.acm.org/doi/10.1145/3369555.3369564#sec-terms
dc.relation.referencesTanaka T., Masumura R., Moriya T., Oba T., Aono Y., (2019). A Joint End-to-End and DNN-HMM Hybrid Automatic Speech Recognition System with Transferring Sharable Knowledge. NTT Media Intelligence Laboratories, NTT Corporation. [Online]. Pp. 2210–2214. DOI: http://dx.doi.org/10.21437/Interspeech.2019-226
dc.relation.referencesShahin I., (2019). Emotion Recognition based on Third-Order Circular Suprasegmental Hidden Markov Model. 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT). [Online]. Pp. 800–805. DOI: https://doi.org/10.1109/ICASSP.2019.8683172
dc.relation.referencesDutta A., Ashishkumar G., Rama Rao C. V., (2021). Performance analysis of ASR system in hybrid DNN-HMM framework using a PWL Euclidean activation function. Frontiers of Computer Science. [Online]. Pp. 2095–2236. DOI: https://doi.org/10.1007/s11704-020-9419-z
dc.relation.referencesWang L., Hasegawa-Johnson M., (2020). A DNN-HMM-DNN Hybrid Model for Discovering Word-Like Units from Spoken Captions and Image Regions. Proc. Interspeech 2020. [Online]. Pp. 1456–1460. DOI: https://doi.org/10.21437/Interspeech.2020-1148
dc.relation.referencesLiu X., Sahidullah M., Kinnunen T., (2021). Learnable MFCCs for Speaker Verification. 2021 IEEE International Symposium on Circuits and Systems (ISCAS). [Online]. Pp. 1456–1460. DOI: http://dx.doi.org/10.21437/Interspeech.2020-1148
dc.relation.referencesDelić V., Perić Z., Sečujski M., Jakovljević N., Nikolić J., Mišković D., Simić N., Suzić S., Delić T., (2019). Speech technology progress based on new machine learning paradigm. Computational Intelligence and Neuroscience. [Online]. Pp. 1687–1706. DOI: https://doi.org/10.1155/2019/4368036
dc.relation.referencesJoshi B., Kumar Sharma A., Singh Yadav N., Tiwari S., (2021). DNN based approach to classify Covid’19 using convolutional neural network and transfer learning. International Journal of Computers and Applications. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/1206212X.2021.1983289 (Accessed 02/18/2022)
dc.relation.referencesZhao Y., (2021). Research on Management Model Based on Deep Learning. Complexity. [Online]. Available: https://www.hindawi.com/journals/complexity/2021/9997662/ (Accessed 02/18/2022)
dc.relation.urihttps://doi.org/10.23939/acps2021.02.090
dc.relation.urihttps://dl.acm.org/doi/10.1145/3369555.3369564#sec-terms
dc.relation.urihttp://dx.doi.org/10.21437/Interspeech.2019-226
dc.relation.urihttps://doi.org/10.1109/ICASSP.2019.8683172
dc.relation.urihttps://doi.org/10.1007/s11704-020-9419-z
dc.relation.urihttps://doi.org/10.21437/Interspeech.2020-1148
dc.relation.urihttp://dx.doi.org/10.21437/Interspeech.2020-1148
dc.relation.urihttps://doi.org/10.1155/2019/4368036
dc.relation.urihttps://www.tandfonline.com/doi/abs/10.1080/1206212X.2021.1983289
dc.relation.urihttps://www.hindawi.com/journals/complexity/2021/9997662/
dc.rights.holder© Lviv Polytechnic National University, 2023
dc.rights.holder© Yakubovskyi R., Morozov Y., 2023
dc.subjectvoice recognition
dc.subjectHMM
dc.subjectDNN
dc.subjectdataset
dc.titleSpeech Models Training Technologies Comparison Using Word Error Rate
dc.typeArticle

Files

Original bundle (2 of 2)

Name: 2023v8n1_Yakubovskyi_R-Speech_Models_Training_74-80.pdf
Size: 413.04 KB
Format: Adobe Portable Document Format

Name: 2023v8n1_Yakubovskyi_R-Speech_Models_Training_74-80__COVER.png
Size: 538.47 KB
Format: Portable Network Graphics

License bundle (1 of 1)

Name: license.txt
Size: 1.76 KB
Format: Plain Text