Analisis Kritis terhadap Kebijakan dan Praktik Etika pada ChatGPT

Authors

  • Devi Dwi Purwanto, Universitas Negeri Malang
  • Aji Prasetya Wibawa, Universitas Negeri Malang
  • Hakkun Elmunsyah, Universitas Negeri Malang
  • Siti Sendari, Universitas Negeri Malang

DOI:

https://doi.org/10.36312/xv4mqp17

Keywords:

AI Ethics, Generative AI, ChatGPT, Transparency, Accountability, Algorithmic Bias, AI Policy

Abstract

A Critical Analysis of Ethical Policies and Practices at ChatGPT

The rapid advancement of text-based Generative AI technologies through models such as ChatGPT has had a significant impact across various sectors. However, the adoption of this technology also raises profound ethical challenges, particularly with regard to transparency, fairness, accountability, and the management of algorithmic bias. This article critically examines the implementation of ethical policies in the design and development of ChatGPT, focusing on fundamental ethical principles such as fairness, non-discrimination, and transparency. The analytical method employed is an ex post facto approach, in which analysis is conducted after events or policies have been implemented in order to assess their impact on the management of algorithmic bias and the risks of disinformation. A normative ethical approach is also applied to evaluate the extent to which the policies implemented by OpenAI align with principles of justice. The study further identifies the major challenges developers face in ensuring that Generative AI is used fairly and responsibly. In addition, the article offers recommendations for enhancing transparency and accountability in AI deployment, with the aim of fostering a more inclusive and accountable ecosystem. The novelty of this article lies in its in-depth analysis of the ethical policies implemented by OpenAI and its focus on fundamental ethical principles in the context of Generative AI development, an area that has received limited attention in prior research. In conclusion, the success of ethical AI depends on the implementation of comprehensive and systematic policies that are not only technically efficient but also respectful of social values and individual rights.

References

Aldhafeeri, F. M. (2024). Navigating the ethical landscape of artificial intelligence in radiography: A cross-sectional study of radiographers’ perspectives. BMC Medical Ethics, 25(1), 52. https://doi.org/10.1186/s12910-024-01052-w

Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics, 11(3), 58. https://doi.org/10.3390/informatics11030058

Alvarez, J. M., Colmenarejo, A. B., Elobaid, A., Fabbrizzi, S., Fahimi, M., Ferrara, A., Ghodsi, S., Mougan, C., Papageorgiou, I., Reyero, P., Russo, M., Scott, K. M., State, L., Zhao, X., & Ruggieri, S. (2024). Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology, 26(2), 31. https://doi.org/10.1007/s10676-024-09746-w

Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity. (2024). RAND Corporation. https://doi.org/10.7249/PEA3131-1

Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., & Kujala, S. (2023). Transparency and explainability of AI systems: From ethical guidelines to requirements. Information and Software Technology, 159, 107197. https://doi.org/10.1016/j.infsof.2023.107197

Buijsman, S. (2024). Transparency for AI systems: A value-based approach. Ethics and Information Technology, 26(2), 34. https://doi.org/10.1007/s10676-024-09770-w

Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/fhumd.2024.1421273

Friedrich, F., Brack, M., Struppek, L., Hintersdorf, D., Schramowski, P., Luccioni, S., & Kersting, K. (2025). Auditing and instructing text-to-image generation models on fairness. AI and Ethics, 5(3), 2103–2123. https://doi.org/10.1007/s43681-024-00531-5

Hosseini Tabaghdehi, S. A., & Ayaz, Ö. (2025). AI ethics in action: A circular model for transparency, accountability and inclusivity. Journal of Managerial Psychology. https://doi.org/10.1108/JMP-03-2024-0177

Jain, A. (2025). Designing for Ethical and Inclusive AI Through a Human-Centered Design Lens. Global Business & Economics Journal. https://doi.org/10.70924/f83n6wqz/kdxoixwt

Kharvi, P. L. (2024). Understanding the Impact of AI-Generated Deepfakes on Public Opinion, Political Discourse, and Personal Security in Social Media. IEEE Security & Privacy, 22(4), 115–122. https://doi.org/10.1109/MSEC.2024.3405963

Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: What it is and how it works. AI & SOCIETY, 39(4), 1871–1882. https://doi.org/10.1007/s00146-023-01635-y

Akinrinola, O., Okoye, C. C., Ofodile, O. C., & Ugochukwu, C. E. (2024). Navigating and reviewing ethical dilemmas in AI development: Strategies for transparency, fairness, and accountability. GSC Advanced Research and Reviews, 18(3), 050–058. https://doi.org/10.30574/gscarr.2024.18.3.0088

Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems, 34(2), 101885. https://doi.org/10.1016/j.jsis.2024.101885

Radanliev, P. (2025). AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development. Applied Artificial Intelligence, 39(1), 2463722. https://doi.org/10.1080/08839514.2025.2463722

Ricciardi Celsi, L., & Zomaya, A. Y. (2025). Perspectives on Managing AI Ethics in the Digital Age. Information, 16(4), 318. https://doi.org/10.3390/info16040318

Saeidnia, H. R. (2023). Ethical artificial intelligence (AI): Confronting bias and discrimination in the library and information industry. Library Hi Tech News. https://doi.org/10.1108/LHTN-10-2023-0182

Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, 104405. https://doi.org/10.1016/j.obhdp.2025.104405

Singhal, A., Neveditsin, N., Tanveer, H., & Mago, V. (2024). Toward Fairness, Accountability, Transparency, and Ethics in AI for Social Media and Health Care: Scoping Review. JMIR Medical Informatics, 12, e50048. https://doi.org/10.2196/50048

Spinellis, D. (2025). False authorship: An explorative case study around an AI-generated article published under my name. Research Integrity and Peer Review, 10(1), 8. https://doi.org/10.1186/s41073-025-00165-z

Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44–54. https://doi.org/10.1145/2447976.2447990

The Belmont Report. (n.d.).

The Nuremberg Principles Conference Program 3-23-2023. (n.d.).

Whittaker, M., Crawford, K., Dobbe, R., & Fried, G. (2018). AI Now Report 2018. AI Now Institute. https://ainowinstitute.org/wp-content/uploads/2023/04/AI_Now_2018_Report.pdf

World Medical Association. (2025). World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Participants. JAMA, 333(1), 71. https://doi.org/10.1001/jama.2024.21972


Published

2025-12-01

Issue

Section

Articles

How to Cite

Purwanto, D. D., Wibawa, A. P., Elmunsyah, H., & Sendari, S. (2025). Analisis Kritis terhadap Kebijakan dan Praktik Etika pada ChatGPT. Reflection Journal, 5(2), 1013–1025. https://doi.org/10.36312/xv4mqp17
