Legal Analysis of Algorithmic Deception in the Age of Artificial Intelligence: Challenges and Solutions

Document Type: Original Article

Author

Assistant Professor of Private Law, The Research Group of Islamic Jurisprudence and Law, Institute for Islamic Studies in Humanities, Ferdowsi University of Mashhad, Mashhad, Iran

10.22034/jstp.2025.11887.1854

Abstract

The rapid advancement of artificial intelligence (AI) has given rise to emerging legal issues and challenges. Among the most important of these is AI deception and, consequently, the production and dissemination of misinformation. This phenomenon is not merely a side effect of technological evolution; it signals a fundamental shift in the dynamics of information dissemination and trust in society. Addressing it is therefore essential, because intelligent deception can pose a serious threat to national security, social stability, and civil rights. In light of this necessity, the present study uses a descriptive-analytical approach and comparative study to examine the dimensions of deception in AI. While identifying existing legal challenges, it also analyzes appropriate solutions for addressing this phenomenon. The study begins with the theoretical foundations, explaining the nature of deception in AI across four main areas. It then structurally analyzes the mechanism of algorithmic misinformation, addresses the legal challenges of confronting intelligent deception, and finally presents legal solutions to counter it. The results reveal that legal systems, especially those with weaker legal infrastructures, face serious challenges and shortcomings in dealing with the emerging complexities of algorithmic deception. These shortcomings stem mainly from the inadequate adaptability of existing laws to rapid technological developments. Under such circumstances, effectively addressing the challenges of AI deception requires intelligent use of existing legal capacities and a dynamic interpretation of general laws that can be extended to new situations. There is also a growing need for an innovative and flexible regulatory system, along with legal frameworks commensurate with the dynamic nature of new technologies.
This approach can address immediate needs while providing the necessary platform for the gradual evolution of the Iranian legal system in this area.
 
