[2] Sodian, B., Taylor, C., Harris, P. L., & Perner, J. (1991). Early Deception And The Child’s Theory Of Mind: False Trails And Genuine Markers. Child Development, 62(3), 468. https://doi.org/10.2307/1131124
[3] Chandler, M., Fritz, A. S., & Hala, S. (1989). Small-Scale Deceit: Deception As A Marker Of Two-, Three-, And Four-Year-Olds’ Early Theories Of Mind. Child Development, 60(6), 1263. https://doi.org/10.2307/1130919
[7] Bakhtin, A., Brown, N., Dinan, E., Farina, G., Flaherty, C., Fried, D., Goff, A., Gray, J., Hu, H., Jacob, A. P., Komeili, M., Konath, K., Kwon, M., Lerer, A., Lewis, M., Miller, A. H., Mitts, S., Renduchintala, A., Roller, S., … Zijlstra, M. (2022). Human-Level Play In The Game Of Diplomacy By Combining Language Models With Strategic Reasoning. Science, 378(6624), 1067–1074. https://doi.org/10.1126/science.ade9097
[9] Dogra, A., Pillutla, K., Deshpande, A., Sai, A. B., Nay, J., Rajpurohit, T., Kalyan, A., & Ravindran, B. (2024). Deception In Reinforced Autonomous Agents (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2405.04325
[11] Zhong, H., O’Neill, E., & Hoffmann, J. A. (2024). Regulating AI: Applying Insights From Behavioural Economics And Psychology To The Application Of Article 5 Of The EU AI Act. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20001–20009. https://doi.org/10.1609/aaai.v38i18.29977
[12] Alanazi, S., Asif, S., & Moulitsas, I. (2024a). Examining The Societal Impact And Legislative Requirements Of Deepfake Technology: A Comprehensive Study. International Journal of Social Science and Humanity. https://doi.org/10.18178/ijssh.2024.14.2.1194
[13] Faraoni, S. (2023). Persuasive Technology And Computational Manipulation: Hypernudging Out Of Mental Self-Determination. Frontiers in Artificial Intelligence, 6, 1216340. https://doi.org/10.3389/frai.2023.1216340
[15] Castelfranchi, C., & Tan, Y.-H. (2001). The Role Of Trust And Deception In Virtual Societies. Proceedings Of The 34th Annual Hawaii International Conference on System Sciences, 8. https://doi.org/10.1109/HICSS.2001.927042
[16] Li, L., Ma, H., Kulkarni, A. N., & Fu, J. (2023). Dynamic Hypergames For Synthesis Of Deceptive Strategies With Temporal Logic Objectives. IEEE Transactions on Automation Science and Engineering, 20(1), 334–345. https://doi.org/10.1109/TASE.2022.3150167
[17] Savas, Y., Verginis, C. K., & Topcu, U. (2022). Deceptive Decision-Making Under Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5332–5340. https://doi.org/10.1609/aaai.v36i5.20470
[18] Sarkadi, Ş., Panisson, A. R., Bordini, R. H., McBurney, P., Parsons, S., & Chapman, M. (2019). Modelling Deception Using Theory Of Mind In Multi-Agent Systems. AI Communications, 32(4), 287–302. https://doi.org/10.3233/AIC-190615
[19] Hamann, H., Khaluf, Y., Botev, J., Divband Soorati, M., Ferrante, E., Kosak, O., Montanier, J.-M., Mostaghim, S., Redpath, R., Timmis, J., Veenstra, F., Wahby, M., & Zamuda, A. (2016). Hybrid Societies: Challenges And Perspectives In The Design Of Collective Behavior In Self-Organizing Systems. Frontiers in Robotics and AI, 3. https://doi.org/10.3389/frobt.2016.00014
[20] Huang, L., & Zhu, Q. (2022). A Dynamic Game Framework For Rational And Persistent Robot Deception With An Application To Deceptive Pursuit-Evasion. IEEE Transactions on Automation Science and Engineering, 19(4), 2918–2932. https://doi.org/10.1109/TASE.2021.3097286
[21] Zhan, X., Xu, Y., Abdi, N., Collenette, J., Abu-Salma, R., & Sarkadi, S. (2024). Banal Deception Human-AI Ecosystems: A Study Of People’s Perceptions Of LLM-Generated Deceptive Behaviour (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2406.08386
[22] Anderson, D., Stephenson, M., Togelius, J., Salge, C., Levine, J., & Renz, J. (2018). Deceptive Games. In K. Sim & P. Kaufmann (Eds.), Applications Of Evolutionary Computation (Vol. 10784, pp. 376–391). Springer International Publishing. https://doi.org/10.1007/978-3-319-77538-8_26
[24] Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., Righetti, L., & Saunders, W. (2021). Truthful AI: Developing And Governing AI That Does Not Lie (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2110.06674
[26] Hasibuan, R. H., Rawung, A. J., Paranduk, D. M. D., & Wowiling, F. J. (2024). Artificial Intelligence In The Auspices Of Law: A Diverge Perspective. Mimbar Hukum, 36(1), 111–140. https://doi.org/10.22146/mh.v36i1.10827
[27] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
[28] Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W.-G. (2021). Framing Governance For A Contested Emerging Technology: Insights From AI Policy. Policy and Society, 40(2), 158–177. https://doi.org/10.1080/14494035.2020.1855800
[29] Cooper, A. F., Lee, K., Grimmelmann, J., Ippolito, D., Callison-Burch, C., Choquette-Choo, C. A., Mireshghallah, N., Brundage, M., Mimno, D., Choksi, M. Z., Balkin, J. M., Carlini, N., De Sa, C., Frankle, J., Ganguli, D., Gipson, B., Guadamuz, A., Harris, S. L., Jacobs, A. Z., … Zeide, E. (2023). Report Of The 1st Workshop On Generative AI And Law (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2311.06477
[30] Afgiansyah, A. (2023). Artificial Intelligence Neutrality: Framing Analysis Of GPT Powered-Bing Chat And Google Bard. Jurnal Riset Komunikasi, 6(2), 179–193. https://doi.org/10.38194/jurkom.v6i2.908
[31] Pataranutaporn, P., Archiwaranguprok, C., Chan, S. W. T., Loftus, E., & Maes, P. (2024). Synthetic Human Memories: AI-Edited Images And Videos Can Implant False Memories And Distort Recollection (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2409.08895
[33] Alanazi, S., Asif, S., & Moulitsas, I. (2024b). Examining The Societal Impact And Legislative Requirements Of Deepfake Technology: A Comprehensive Study. International Journal of Social Science and Humanity. https://doi.org/10.18178/ijssh.2024.14.2.1194
[34] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance). https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng
[35] Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI Deception: A Survey Of Examples, Risks, And Potential Solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988
[36] Kim, T. W., Lu, T., Lee, K., Cheng, Z., Tang, Y., & Hooker, J. (2021). When Is It Permissible For Artificial Intelligence To Lie? A Trust-Based Approach (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2103.05434
[37] Fartash, K., Kheiri, E., & Baramaki, T. (2024). Providing A Framework For AI Infrastructure In Iran, With A Focus On Service Providers And Service Aggregators Of AI. Journal of Science and Technology Policy, 17(3), 9–25. https://doi.org/10.22034/jstp.2025.11771.1815 (In Persian)
[40] Shahbazinia, M., & Zolghadr, M. J. (2024). Recognizing Artificial Intelligence (AI) As A Legal Person: Providing A Policy Proposal To The Iranian Legislator. Journal of Science and Technology Policy, 17(3), 41–52. https://doi.org/10.22034/jstp.2025.11778.1819 (In Persian)