Забезпечення кібербезпеки систем штучного інтелекту: аналіз вразливостей, атак і контрзаходів

dc.citation.epage22
dc.citation.issue12
dc.citation.journalTitleВісник Національного університету "Львівська політехніка" "Інформаційні системи та мережі"
dc.citation.spage7
dc.contributor.affiliationНаціональний аерокосмічний університет ім. М. Є. Жуковського “ХАІ”
dc.contributor.affiliationNational aerospace university “KhAI”
dc.contributor.authorНеретін, Олексій
dc.contributor.authorХарченко, Вячеслав
dc.contributor.authorNeretin, Oleksii
dc.contributor.authorKharchenko, Vyacheslav
dc.coverage.placenameЛьвів
dc.coverage.placenameLviv
dc.date.accessioned2025-03-06T08:06:16Z
dc.date.created2022-02-28
dc.date.issued2022-02-28
dc.description.abstractОстанніми роками багато компаній почали інтегрувати системи штучного інтелекту (СШІ) в свої інфраструктури. СШІ використовують у вразливих сферах суспільства, таких як судова система, критична інфраструктура, відеоспостереження тощо. Це зумовлює необхідність достовірного оцінювання і гарантованого забезпечення кібербезпеки СШІ. У дослідженні проаналізовано стан справ щодо кібербезпеки цих систем. Класифіковано можливі типи атак і детально розглянуто основні з них. Проаналізовано загрози і атаки за рівнем тяжкості й оцінено ризики безпеки з використанням методу IMECA. Виявлено, що найвищі ризики становлять «Змагальні атаки» та атаки «Отруєння даних», але контрзаходи щодо них не на належному рівні. Зроблено висновок, що існує потреба в формалізації та стандартизації життєвого циклу розроблення та використання безпечних СШІ. Обґрунтовано напрями подальших досліджень щодо необхідності розроблення методів оцінювання і забезпечення кібербезпеки СШІ, зокрема для систем, які надають штучний інтелект як сервіс.
dc.description.abstractIn recent years, many companies have begun to integrate artificial intelligence systems (AIS) into their infrastructures. AIS is used in sensitive areas of society, such as the judicial system, critical infrastructure, video surveillance, and others. This determines the need for a reliable assessment and guaranteed provision of cyber security of AIS. The study analyzed the state of affairs regarding the cyber security of these systems. Possible types of attacks are classified and the main ones are considered in detail. Threats and attacks were analyzed by level of severity and security risks were assessed using the IMECA method. “Adversarial attacks” and “Data poisoning” attacks are found to have the highest risks of danger, but the countermeasures are not at the appropriate level. It was concluded that there is a need for formalization and standardization of the life cycle of the development and use of secure AIS. The directions of further research regarding the need to develop methods for evaluating and ensuring cyber security of the AIS are substantiated, including for systems that provide AI as a service.
dc.format.extent7-22
dc.format.pages16
dc.identifier.citationНеретін О. Забезпечення кібербезпеки систем штучного інтелекту: аналіз вразливостей, атак і контрзаходів / Олексій Неретін, Вячеслав Харченко // Вісник Національного університету "Львівська політехніка" "Інформаційні системи та мережі". — Львів : Видавництво Львівської політехніки, 2022. — № 12. — С. 7–22.
dc.identifier.citationenNeretin O. Ensurance of artificial intelligence systems cyber security: analysis of vulnerabilities, attacks and countermeasures / Neretin Oleksii, Kharchenko Vyacheslav // Visnyk Natsionalnoho universytetu "Lvivska politekhnika" "Informatsiini systemy ta merezhi". — Lviv : Lviv Polytechnic Publishing House, 2022. — No 12. — P. 7–22.
dc.identifier.doihttps://doi.org/10.23939/sisn2022.12.007
dc.identifier.urihttps://ena.lpnu.ua/handle/ntb/63949
dc.language.isouk
dc.publisherВидавництво Львівської політехніки
dc.publisherLviv Polytechnic Publishing House
dc.relation.ispartofВісник Національного університету "Львівська політехніка" "Інформаційні системи та мережі", 12, 2022
dc.relation.references1. Herping S. (2019). Securing Artificial Intelligence – Part I. https://www.stiftung-nv.de/sites/default/files/securing_artificial_intelligence.pdf
dc.relation.references2. PwC: The macroeconomic impact of artificial intelligence (2018). https://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-technical-report-feb-18.pdf
dc.relation.references3. Comiter M. (2019). Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It. Belfer Center for Science and International Affairs, Harvard Kennedy School. https://www.belfercenter.org/sites/default/files/2019-08/AttackingAI/AttackingAI.pdf
dc.relation.references4. Povolny S. (2020). Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles. McAfee Labs. https://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles/
dc.relation.references5. Lohn A. (2020). Hacking AI. Center for Security and Emerging Technology. https://doi.org/10.51593/2020CA006
dc.relation.references6. Lohn A. (2021). Poison in the Well. Center for Security and Emerging Technology. https://doi.org/10.51593/2020CA013
dc.relation.references7. Ruef M. (2020). Hacking Artificial Intelligence – Influencing and Cases of Manipulation. https://www.researchgate.net/publication/338764153_Hacking_Artificial_Intelligence_-_Influencing_and_Cases_of_Manipulation
dc.relation.references8. Kim A. (2020). The Impact of Platform Vulnerabilities in AI Systems. Massachusetts Institute of Technology. https://dspace.mit.edu/bitstream/handle/1721.1/129159/1227275868-MIT.pdf
dc.relation.references9. Hartmann K., Steup C. (2020). Hacking the AI – the Next Generation of Hijacked Systems. In 12th International Conference on Cyber Conflict (CyCon). https://doi.org/10.23919/CyCon49761.2020.9131724
dc.relation.references10. Bursztein E. (2018). Attacks against machine learning – an overview. Personal site and blog featuring blog posts, publications and talks. https://elie.net/blog/ai/attacks-against-machine-learning-an-overview/
dc.relation.references11. Ansah H. (2021). Adversarial Attacks on Neural Networks: Exploring the Fast Gradient Sign Method. Neptune blog. https://neptune.ai/blog/adversarial-attacks-on-neural-networks-exploring-the-fast-gradient-sign-method
dc.relation.references12. Griffin J. (2019). Researchers hack AI video analytics with color printout. https://www.securityinfowatch.com/video-surveillance/video-analytics/article/21080107/researchers-hack-ai-video-analytics-with-color-printout
dc.relation.references13. Thys S., Ranst W. V., Goedemé T. (2019). Fooling automated surveillance cameras: adversarial patches to attack person detection. arXiv preprint arXiv:1904.08653. https://doi.org/10.48550/arXiv.1904.08653
dc.relation.references14. Eykholt K., Evtimov I., Fernandes E., Li B., Rahmati A., Xiao C., Prakash A., Kohno T., Song D. (2018). Robust Physical-World Attacks on Deep Learning Models. arXiv preprint arXiv:1707.08945. https://doi.org/10.48550/arXiv.1707.08945
dc.relation.references15. Eykholt K., Evtimov I., Fernandes E., Li B., Rahmati A., Tramer F., Prakash A., Kohno T., Song D. (2018). Physical Adversarial Examples for Object Detectors. arXiv preprint arXiv:1807.07769. https://doi.org/10.48550/arXiv.1807.07769
dc.relation.references16. Su J., Vargas D. V., Sakurai K. (2019). Attacking convolutional neural network using differential evolution. IPSJ Transactions on Computer Vision and Applications. https://doi.org/10.1186/s41074-019-0053-3
dc.relation.references17. Goodfellow I. J., Shlens J., Szegedy C. (2015). Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572. https://doi.org/10.48550/arXiv.1412.6572
dc.relation.references18. Papernot N., McDaniel P., Goodfellow I. J. (2016). Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv preprint arXiv:1605.07277. https://doi.org/10.48550/arXiv.1605.07277
dc.relation.references19. Catak F. O., Yayilgan S. Y. (2021). Deep Neural Network based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks. In International Conference on Intelligent Technologies and Applications, 280–291. https://doi.org/10.1007/978-3-030-71711-7_23
dc.relation.references20. Volborth M. (2019). Detecting backdoor attacks on artificial neural networks. https://ece.duke.edu/about/news/detecting-backdoor-attacks-artificial-neural-networks
dc.relation.references21. Vincent J. (2016). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
dc.relation.references22. Ji Y., Liu Z., Hu X., Wang P., Zhang Y. (2019). Programmable Neural Network Trojan for Pre-Trained Feature Extractor. arXiv preprint arXiv:1901.07766. https://doi.org/10.48550/arXiv.1901.07766
dc.relation.references23. Yang Z., Iyer N., Reimann J., Virani N. (2019). Design of intentional backdoors in sequential models. arXiv preprint arXiv:1902.09972. https://doi.org/10.48550/arXiv.1902.09972
dc.relation.references24. Gu T., Dolan-Gavitt B., Garg S. (2017). Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733. https://doi.org/10.48550/arXiv.1708.06733
dc.relation.references25. Biggio B., Nelson B., Laskov P. (2013). Poisoning Attacks against Support Vector Machines. arXiv preprint arXiv:1206.6389. https://doi.org/10.48550/arXiv.1206.6389
dc.relation.references26. Jagielski M., Oprea A., Biggio B., Liu C., Nita-Rotaru C., Li B. (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), 19–35. https://doi.org/10.1109/SP.2018.00057
dc.relation.references27. Xiao H., Biggio B., Brown G., Fumera G., Eckert C., Roli F. (2015). Is feature selection secure against training data poisoning? In International Conference on Machine Learning, 1689–1698. https://doi.org/10.48550/arXiv.1804.07933
dc.relation.references28. Fredrikson M., Jha S., Ristenpart T. (2015). Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 1322–1333. https://doi.org/10.1145/2810103.2813677
dc.relation.references29. Shokri R., Stronati M., Song C., Shmatikov V. (2017). Membership Inference Attacks against Machine Learning Models. In the proceedings of the IEEE Symposium on Security and Privacy. https://doi.org/10.48550/arXiv.1610.05820
dc.relation.references30. Salem A., Zhang Y., Humbert M., Berrang P., Fritz M., Backes M. (2018). ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. arXiv preprint arXiv:1806.01246. https://doi.org/10.48550/arXiv.1806.01246
dc.relation.references31. Rahman A., Rahman T., Laganière R., Mohammed N., Wang Y. (2018). Membership Inference Attack against Differentially Private Deep Learning Model. https://www.tdp.cat/issues16/tdp.a289a17.pdf
dc.relation.references32. Song L., Shokri R., Mittal P. (2019). Privacy Risks of Securing Machine Learning Models against Adversarial Examples. arXiv preprint arXiv:1905.10291. https://doi.org/10.48550/arXiv.1905.10291
dc.relation.references33. Hayes J., Melis L., Danezis G., De Cristofaro E. (2018). LOGAN: Membership Inference Attacks Against Generative Models. arXiv preprint arXiv:1705.07663. https://doi.org/10.48550/arXiv.1705.07663
dc.relation.references34. Singh P. (2022). Data Leakage in Machine Learning: How it can be detected and minimize the risk. https://towardsdatascience.com/data-leakage-in-machine-learning-how-it-can-be-detected-and-minimize-the-risk-8ef4e3a97562
dc.relation.references35. Rakin A. S., He Z., Fan D. (2019). Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search. arXiv preprint arXiv:1903.12269. https://doi.org/10.48550/arXiv.1903.12269
dc.relation.references36. Tramèr F., Zhang F., Juels A., Reiter M. K., Ristenpart T. (2016). Stealing Machine Learning Models via Prediction APIs. Proceedings of the 25th USENIX Security Symposium. https://doi.org/10.48550/arXiv.1609.02943
dc.relation.references37. Bhagoji A. N., Chakraborty S., Mittal P., Calo S. B. (2019). Analyzing Federated Learning through an Adversarial Lens. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97:634–643. http://proceedings.mlr.press/v97/bhagoji19a.html
dc.relation.references38. Androulidakis I., Kharchenko V., Kovalenko A. (2016). IMECA-based Technique for Security Assessment of Private Communications: Technology and Training. https://doi.org/10.11610/isij.3505
dc.relation.references39. Wolff J. (2020). How to improve cybersecurity for artificial intelligence. The Brookings Institution. https://www.brookings.edu/research/how-to-improve-cybersecurity-for-artificial-intelligence/
dc.relation.references40. Newman J. C. (2019). Toward AI Security: Global Aspirations for a More Resilient Future. https://cltc.berkeley.edu/wp-content/uploads/2019/02/Toward_AI_Security.pdf
dc.relation.references41. National Security Commission on Artificial Intelligence. First Quarter Recommendations. (2020). https://drive.google.com/file/d/1wkPh8Gb5drBrKBg6OhGu5oNaTEERbKss/view
dc.relation.references42. Pupillo L., Fantin S., Ferreira A., Polito C. (2021). Artificial Intelligence and Cybersecurity. CEPS Task Force Report. https://www.ceps.eu/wp-content/uploads/2021/05/CEPS-TFR-Artificial-Intelligence-and-Cybersecurity.pdf
dc.relation.references43. Neustadter D. (2020). Why AI Needs Security. Synopsys Technical Bulletin. https://www.synopsys.com/designware-ip/technical-bulletin/why-ai-needs-security-dwtb-q318.html
dc.relation.references44. Tramèr F., Kurakin A., Papernot N., Goodfellow I., Boneh D., McDaniel P. (2020). Ensemble Adversarial Training: Attacks and Defenses. arXiv preprint arXiv:1705.07204. https://doi.org/10.48550/arXiv.1705.07204
dc.relation.references45. Yuan X., He P., Zhu Q., Li X. (2018). Adversarial Examples: Attacks and Defenses for Deep Learning. arXiv preprint arXiv:1712.07107. https://doi.org/10.48550/arXiv.1712.07107
dc.relation.references46. Dziugaite G. K., Ghahramani Z., Roy D. M. (2016). A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853. https://doi.org/10.48550/arXiv.1608.00853
dc.relation.references47. Papernot N., McDaniel P., Wu X., Jha S., Swami A. (2016). Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. In 2016 IEEE Symposium on Security and Privacy (SP), 582-597. https://doi.org/10.1109/SP.2016.41
dc.relation.references48. Ma S., Liu Y., Tao G., Lee W. C., Zhang X. (2019). NIC: Detecting Adversarial Samples with Neural Network Invariant Checking. In NDSS. https://www.ndss-symposium.org/ndss-paper/nic-detecting-adversarial-samples-with-neural-network-invariant-checking/
dc.relation.references49. Xu W., Evans D., Qi Y. (2018). Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Network and Distributed Systems Security Symposium (NDSS). https://doi.org/10.14722/ndss.2018.23198
dc.relation.references50. Liu C., Li B., Vorobeychik Y., Oprea A. (2017). Robust linear regression against training data poisoning. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 91–102. https://doi.org/10.1145/3128572.3140447
dc.relation.references51. Kharchenko V., Fesenko H., Illiashenko O. (2022). Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application. https://doi.org/10.3390/s22134865
dc.relation.references52. Kharchenko V., Fesenko H., Illiashenko O. (2022). Basic model of non-functional characteristics for assessment of artificial intelligence quality. Radioelectronic and computer systems. https://doi.org/10.32620/reks.2022.2.11
dc.relation.references53. Janbi N., Katib I., Albeshri A., Mehmood R. (2020). Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments. https://doi.org/10.3390/s20205796
dc.relation.referencesen1. Herping, S. (2019). Securing Artificial Intelligence – Part I. https://www.stiftung-nv.de/sites/default/files/securing_artificial_intelligence.pdf
dc.relation.referencesen2. PwC: The macroeconomic impact of artificial intelligence. (2018). https://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-technical-report-feb-18.pdf
dc.relation.referencesen3. Comiter, M. (2019). Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It. Belfer Center for Science and International Affairs, Harvard Kennedy School. https://www.belfercenter.org/sites/default/files/2019-08/AttackingAI/AttackingAI.pdf
dc.relation.referencesen4. Povolny, S. (2020). Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles. McAfee Labs. https://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles/
dc.relation.referencesen5. Lohn, A. (2020). Hacking AI. Center for Security and Emerging Technology. https://doi.org/10.51593/2020CA006
dc.relation.referencesen6. Lohn, A. (2021). Poison in the Well. Center for Security and Emerging Technology. https://doi.org/10.51593/2020CA013
dc.relation.referencesen7. Ruef, M. (2020). Hacking Artificial Intelligence – Influencing and Cases of Manipulation. https://www.researchgate.net/publication/338764153_Hacking_Artificial_Intelligence_-_Influencing_and_Cases_of_Manipulation
dc.relation.referencesen8. Kim, A. (2020). The Impact of Platform Vulnerabilities in AI Systems. Massachusetts Institute of Technology. https://dspace.mit.edu/bitstream/handle/1721.1/129159/1227275868-MIT.pdf
dc.relation.referencesen9. Hartmann, K., & Steup, C. (2020). Hacking the AI – the Next Generation of Hijacked Systems. In 12th International Conference on Cyber Conflict (CyCon). https://doi.org/10.23919/CyCon49761.2020.9131724
dc.relation.referencesen10. Bursztein, E. (2018). Attacks against machine learning – an overview. Personal site and blog featuring blog posts, publications and talks. https://elie.net/blog/ai/attacks-against-machine-learning-an-overview/
dc.relation.referencesen11. Ansah, H. (2021). Adversarial Attacks on Neural Networks: Exploring the Fast Gradient Sign Method. Neptune blog. https://neptune.ai/blog/adversarial-attacks-on-neural-networks-exploring-the-fast-gradient-sign-method
dc.relation.referencesen12. Griffin, J. (2019). Researchers hack AI video analytics with color printout. https://www.securityinfowatch.com/video-surveillance/video-analytics/article/21080107/researchers-hack-ai-video-analytics-with-color-printout
dc.relation.referencesen13. Thys, S., Ranst, W. V., & Goedemé, T. (2019). Fooling automated surveillance cameras: adversarial patches to attack person detection. arXiv preprint arXiv:1904.08653. https://doi.org/10.48550/arXiv.1904.08653
dc.relation.referencesen14. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Models. arXiv preprint arXiv:1707.08945. https://doi.org/10.48550/arXiv.1707.08945
dc.relation.referencesen15. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Tramer, F., Prakash, A., Kohno, T., & Song, D. (2018). Physical Adversarial Examples for Object Detectors. arXiv preprint arXiv:1807.07769. https://doi.org/10.48550/arXiv.1807.07769
dc.relation.referencesen16. Su, J., Vargas, D. V., & Sakurai, K. (2019). Attacking convolutional neural network using differential evolution. IPSJ Transactions on Computer Vision and Applications. https://doi.org/10.1186/s41074-019-0053-3
dc.relation.referencesen17. Goodfellow, I.J., Shlens, J., & Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572. https://doi.org/10.48550/arXiv.1412.6572
dc.relation.referencesen18. Papernot, N., McDaniel, P., & Goodfellow, I. J. (2016). Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv preprint arXiv:1605.07277. https://doi.org/10.48550/arXiv.1605.07277
dc.relation.referencesen19. Catak, F.O., & Yayilgan, S.Y. (2021). Deep Neural Network based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks. In International Conference on Intelligent Technologies and Applications, 280-291. https://doi.org/10.1007/978-3-030-71711-7_23
dc.relation.referencesen20. Volborth, M. (2019). Detecting backdoor attacks on artificial neural networks. https://ece.duke.edu/about/news/detecting-backdoor-attacks-artificial-neural-networks
dc.relation.referencesen21. Vincent, J. (2016). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
dc.relation.referencesen22. Ji, Y., Liu, Z., Hu, X., Wang, P., & Zhang, Y. (2019). Programmable Neural Network Trojan for Pre-Trained Feature Extractor. arXiv preprint arXiv:1901.07766. https://doi.org/10.48550/arXiv.1901.07766
dc.relation.referencesen23. Yang, Z., Iyer, N., Reimann, J., & Virani, N. (2019). Design of intentional backdoors in sequential models. arXiv preprint arXiv:1902.09972. https://doi.org/10.48550/arXiv.1902.09972
dc.relation.referencesen24. Gu, T., Dolan-Gavitt, B., & Garg, S. (2017). Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733. https://doi.org/10.48550/arXiv.1708.06733
dc.relation.referencesen25. Biggio, B., Nelson, B., & Laskov, P. (2013). Poisoning Attacks against Support Vector Machines. arXiv preprint arXiv:1206.6389. https://doi.org/10.48550/arXiv.1206.6389
dc.relation.referencesen26. Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., & Li, B. (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), 19–35. https://doi.org/10.1109/SP.2018.00057
dc.relation.referencesen27. Xiao, H., Biggio, B., Brown, G., Fumera, G., Eckert, C., & Roli, F. (2015). Is feature selection secure against training data poisoning? In International Conference on Machine Learning, 1689–1698. https://doi.org/10.48550/arXiv.1804.07933
dc.relation.referencesen28. Fredrikson, M., Jha, S., & Ristenpart, T. (2015). Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 1322–1333. https://doi.org/10.1145/2810103.2813677
dc.relation.referencesen29. Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership Inference Attacks against Machine Learning Models. In the proceedings of the IEEE Symposium on Security and Privacy. https://doi.org/10.48550/arXiv.1610.05820
dc.relation.referencesen30. Salem, A., Zhang, Y., Humbert, M., Berrang, P., Fritz, M., & Backes, M. (2018). ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. arXiv preprint arXiv:1806.01246. https://doi.org/10.48550/arXiv.1806.01246
dc.relation.referencesen31. Rahman, A., Rahman, T., Laganière, R., Mohammed, N., & Wang, Y. (2018). Membership Inference Attack against Differentially Private Deep Learning Model. https://www.tdp.cat/issues16/tdp.a289a17.pdf
dc.relation.referencesen32. Song, L., Shokri, R., & Mittal, P. (2019). Privacy Risks of Securing Machine Learning Models against Adversarial Examples. arXiv preprint arXiv:1905.10291. https://doi.org/10.48550/arXiv.1905.10291
dc.relation.referencesen33. Hayes, J., Melis, L., Danezis, G., & De Cristofaro, E. (2018). LOGAN: Membership Inference Attacks Against Generative Models. arXiv preprint arXiv:1705.07663. https://doi.org/10.48550/arXiv.1705.07663
dc.relation.referencesen34. Singh, P. (2022). Data Leakage in Machine Learning: How it can be detected and minimize the risk. https://towardsdatascience.com/data-leakage-in-machine-learning-how-it-can-be-detected-and-minimize-the-risk-8ef4e3a97562
dc.relation.referencesen35. Rakin, A. S., He, Z., & Fan, D. (2019). Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search. arXiv preprint arXiv:1903.12269. https://doi.org/10.48550/arXiv.1903.12269
dc.relation.referencesen36. Tramèr, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016). Stealing Machine Learning Models via Prediction APIs. Proceedings of the 25th USENIX Security Symposium. https://doi.org/10.48550/arXiv.1609.02943
dc.relation.referencesen37. Bhagoji, A. N., Chakraborty, S., Mittal, P., & Calo, S. B. (2019). Analyzing Federated Learning through an Adversarial Lens. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97:634-643. http://proceedings.mlr.press/v97/bhagoji19a.html
dc.relation.referencesen38. Androulidakis, I., Kharchenko, V., & Kovalenko, A. (2016). IMECA-based Technique for Security Assessment of Private Communications: Technology and Training. https://doi.org/10.11610/isij.3505
dc.relation.referencesen39. Wolff, J. (2020). How to improve cybersecurity for artificial intelligence. The Brookings Institution. https://www.brookings.edu/research/how-to-improve-cybersecurity-for-artificial-intelligence/
dc.relation.referencesen40. Newman, J. C. (2019). Toward AI Security: Global Aspirations for a More Resilient Future. https://cltc.berkeley.edu/wp-content/uploads/2019/02/Toward_AI_Security.pdf
dc.relation.referencesen41. National Security Commission on Artificial Intelligence. First Quarter Recommendations (2020). https://drive.google.com/file/d/1wkPh8Gb5drBrKBg6OhGu5oNaTEERbKss/view
dc.relation.referencesen42. Pupillo, L., Fantin, S., Ferreira, A., & Polito, C. (2021). Artificial Intelligence and Cybersecurity. CEPS Task Force Report. https://www.ceps.eu/wp-content/uploads/2021/05/CEPS-TFR-Artificial-Intelligence-and-Cybersecurity.pdf
dc.relation.referencesen43. Neustadter, D. (2020). Why AI Needs Security. Synopsys Technical Bulletin. https://www.synopsys.com/designware-ip/technical-bulletin/why-ai-needs-security-dwtb-q318.html
dc.relation.referencesen44. Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2020). Ensemble Adversarial Training: Attacks and Defenses. arXiv preprint arXiv:1705.07204. https://doi.org/10.48550/arXiv.1705.07204
dc.relation.referencesen45. Yuan, X., He, P., Zhu, Q., & Li, X. (2018). Adversarial Examples: Attacks and Defenses for Deep Learning. arXiv preprint arXiv:1712.07107. https://doi.org/10.48550/arXiv.1712.07107
dc.relation.referencesen46. Dziugaite, G. K., Ghahramani, Z., & Roy, D. M. (2016). A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853. https://doi.org/10.48550/arXiv.1608.00853
dc.relation.referencesen47. Papernot, N., McDaniel, P., Wu, X., Jha, S., & Swami, A. (2016). Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. In 2016 IEEE Symposium on Security and Privacy (SP), 582–597. https://doi.org/10.1109/SP.2016.41
dc.relation.referencesen48. Ma, S., Liu, Y., Tao, G., Lee, W. C., & Zhang, X. (2019). NIC: Detecting Adversarial Samples with Neural Network Invariant Checking. In NDSS. https://www.ndss-symposium.org/ndss-paper/nic-detecting-adversarial-samples-with-neural-network-invariant-checking/
dc.relation.referencesen49. Xu, W., Evans, D., & Qi, Y. (2018). Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Network and Distributed Systems Security Symposium (NDSS). https://doi.org/10.14722/ndss.2018.23198
dc.relation.referencesen50. Liu, C., Li, B., Vorobeychik, Y., & Oprea, A. (2017). Robust linear regression against training data poisoning. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 91–102. https://doi.org/10.1145/3128572.3140447
dc.relation.referencesen51. Kharchenko, V., Fesenko, H., & Illiashenko, O. (2022). Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application. https://doi.org/10.3390/s22134865
dc.relation.referencesen52. Kharchenko, V., Fesenko, H., & Illiashenko, O. (2022). Basic model of non-functional characteristics for assessment of artificial intelligence quality. Radioelectronic and computer systems. https://doi.org/10.32620/reks.2022.2.11
dc.relation.referencesen53. Janbi, N., Katib, I., Albeshri, A., & Mehmood, R. (2020). Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments. https://doi.org/10.3390/s20205796
dc.relation.urihttps://www.stiftung-nv.de/sites/default/files/securing_artificial_intelligence.pdf
dc.relation.urihttps://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-technical-report-feb-18.pdf
dc.relation.urihttps://www.belfercenter.org/sites/default/files/2019-08/AttackingAI/AttackingAI.pdf
dc.relation.urihttps://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles/
dc.relation.urihttps://www.researchgate.net/publication/338764153_Hacking_Artificial_Intelligence_-_Influencing_and_Cases_of_Manipulation
dc.relation.urihttps://dspace.mit.edu/bitstream/handle/1721.1/129159/1227275868-MIT.pdf
dc.relation.urihttps://doi.org/10.23919/CyCon49761.2020.9131724
dc.relation.urihttps://elie.net/blog/ai/attacks-against-machine-learning-an-overview/
dc.relation.urihttps://neptune.ai/blog/adversarial-attacks-on-neural-networks-exploring-the-fast-gradient-sign-method
dc.relation.urihttps://www.securityinfowatch.com/video-surveillance/video-analytics/article/21080107/researchers-hack-ai-video-analytics-with-color-printout
dc.relation.urihttps://doi.org/10.48550/arXiv.1904.08653
dc.relation.urihttps://doi.org/10.48550/arXiv.1707.08945
dc.relation.urihttps://doi.org/10.48550/arXiv.1807.07769
dc.relation.urihttps://doi.org/10.1186/s41074-019-0053-3
dc.relation.urihttps://doi.org/10.48550/arXiv.1412.6572
dc.relation.urihttps://doi.org/10.48550/arXiv.1605.07277
dc.relation.urihttps://doi.org/10.1007/978-3-030-71711-7_23
dc.relation.urihttps://ece.duke.edu/about/news/detecting-backdoor-attacks-artificial-neural-networks
dc.relation.urihttps://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
dc.relation.urihttps://doi.org/10.48550/arXiv.1901.07766
dc.relation.urihttps://doi.org/10.48550/arXiv.1902.09972
dc.relation.urihttps://doi.org/10.48550/arXiv.1708.06733
dc.relation.urihttps://doi.org/10.48550/arXiv.1206.6389
dc.relation.urihttps://doi.org/10.1109/SP.2018.00057
dc.relation.urihttps://doi.org/10.1145/2810103.2813677
dc.relation.urihttps://doi.org/10.48550/arXiv.1806.01246
dc.relation.urihttps://www.tdp.cat/issues16/tdp.a289a17.pdf
dc.relation.urihttps://doi.org/10.48550/arXiv.1905.10291
dc.relation.urihttps://doi.org/10.48550/arXiv.1705.07663
dc.relation.urihttps://towardsdatascience.com/data-leakage-in-machine-learning-how-it-can-be-detected-and-minimize-the-risk-8ef4e3a97562
dc.relation.urihttps://doi.org/10.48550/arXiv.1903.12269
dc.relation.urihttps://doi.org/10.48550/arXiv.1609.02943
dc.relation.urihttp://proceedings.mlr.press/v97/bhagoji19a.html
dc.relation.urihttps://doi.org/10.11610/isij.3505
dc.relation.urihttps://www.brookings.edu/research/how-to-improve-cybersecurity-for-artificial-intelligence/
dc.relation.urihttps://cltc.berkeley.edu/wp-content/uploads/2019/02/Toward_AI_Security.pdf
dc.relation.urihttps://drive.google.com/file/d/1wkPh8Gb5drBrKBg6OhGu5oNaTEERbKss/view
dc.relation.urihttps://www.ceps.eu/wp-content/uploads/2021/05/CEPS-TFR-Artificial-Intelligence-and-Cybersecurity.pdf
dc.relation.urihttps://www.synopsys.com/designware-ip/technical-bulletin/why-ai-needs-security-dwtb-q318.html
dc.relation.urihttps://doi.org/10.48550/arXiv.1705.07204
dc.relation.urihttps://doi.org/10.48550/arXiv.1712.07107
dc.relation.urihttps://doi.org/10.48550/arXiv.1608.00853
dc.relation.urihttps://doi.org/10.1109/SP.2016.41
dc.relation.urihttps://www.ndss-symposium.org/ndss-paper/nic-detecting-adversarial-samples-with-neural-network-invariant-checking/
dc.relation.urihttps://doi.org/10.14722/ndss.2018.23198
dc.relation.urihttps://doi.org/10.1145/3128572.3140447
dc.relation.urihttps://doi.org/10.3390/s22134865
dc.relation.urihttps://doi.org/10.32620/reks.2022.2.11
dc.relation.urihttps://doi.org/10.3390/s20205796
dc.rights.holder© Національний університет “Львівська політехніка”, 2022
dc.rights.holder© Неретін О., Харченко В., 2022
dc.subjectштучний інтелект
dc.subjectкібербезпека
dc.subjectзмагальні атаки
dc.subjectотруєння і витік даних
dc.subjectтроянські атаки
dc.subjectатаки на модель
dc.subjectкрадіжки і отруєння моделей
dc.subjectконтрзаходи
dc.subjectartificial intelligence
dc.subjectcyber security
dc.subjectadversarial attacks
dc.subjectpoisoning and data leakage
dc.subjecttrojan attacks
dc.subjectmodel attacks
dc.subjectmodel theft and poisoning
dc.subjectcountermeasures
dc.subject.udc004.89
dc.titleЗабезпечення кібербезпеки систем штучного інтелекту: аналіз вразливостей, атак і контрзаходів
dc.title.alternativeEnsurance of artificial intelligence systems cyber security: analysis of vulnerabilities, attacks and countermeasures
dc.typeArticle

Files

Original bundle (2 items)
- Name: 2022n12_Neretin_O-Ensurance_of_artificial_intelligence_7-22.pdf; Size: 1.72 MB; Format: Adobe Portable Document Format
- Name: 2022n12_Neretin_O-Ensurance_of_artificial_intelligence_7-22__COVER.png; Size: 411.81 KB; Format: Portable Network Graphics

License bundle (1 item)
- Name: license.txt; Size: 1.83 KB; Format: Plain Text