DOI: 10.11118/978-80-7701-030-6-0126

GENERATIVE ARTIFICIAL INTELLIGENCE (AI) AS A MIRROR OF GENDER BIASES

Martin Richter 1
1 Faculty of Social Sciences, Charles University, Smetanovo nábř. 6, 110 00 Praha – Staré Město

This article examines gender biases in generative artificial intelligence (AI) models, which can influence decision-making processes and the resulting societal dynamics. Through an analysis of responses from widely used language models (GPT-4o mini, GPT-4o, Llama 3.1, Claude 3.5 Sonnet, Google Gemini, and Mistral), the study investigates how these systems reproduce, amplify, or mitigate gender stereotypes in their generated outputs. Combining an experimental design with content analysis, it classifies each model response as stereotypical, anti-stereotypical, or neutral. The results reveal significant differences between the models: some, such as GPT-4o mini and Llama 3.1, systematically reproduce gender biases, while others, like Claude 3.5 Sonnet and GPT-4o, take a more inclusive approach. The study also highlights the significant social and economic implications of these biases, particularly in workforce recruitment and career counseling, where AI-generated biases can further exacerbate gender inequalities.
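As a minimal sketch of the tallying behind this classification (the records, labels, and the bias_profile helper below are hypothetical illustrations under the paper's three-category scheme, not the study's actual data or code), the snippet computes each model's share of stereotypical, anti-stereotypical, and neutral responses once outputs have been manually coded:

    from collections import Counter

    # Hypothetical coded data: (model, label) pairs, where each label is the
    # outcome of manually coding one model response under the three-category
    # scheme. These records are illustrative placeholders, not study data.
    CODED_RESPONSES = [
        ("GPT-4o mini", "stereotypical"),
        ("GPT-4o mini", "stereotypical"),
        ("GPT-4o mini", "neutral"),
        ("GPT-4o", "anti-stereotypical"),
        ("GPT-4o", "neutral"),
        ("Llama 3.1", "stereotypical"),
        ("Claude 3.5 Sonnet", "neutral"),
        ("Claude 3.5 Sonnet", "anti-stereotypical"),
    ]

    def bias_profile(records):
        """Return, per model, the share of each label among its coded responses."""
        tallies = {}
        for model, label in records:
            tallies.setdefault(model, Counter())[label] += 1
        return {
            model: {label: count / sum(tally.values()) for label, count in tally.items()}
            for model, tally in tallies.items()
        }

    for model, shares in bias_profile(CODED_RESPONSES).items():
        print(model, shares)

Reporting shares rather than raw counts keeps models with different numbers of coded responses comparable, which is what lets a stereotypical-versus-inclusive contrast between, say, GPT-4o mini and Claude 3.5 Sonnet be read off directly.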

Keywords: Generative AI, LLMs, AI Generated Biases, ChatGPT

Pages: 126-133, Published: 2025, Online: 2025



References

  1. BANSAL, C., PANDEY, K. K., GOEL, R., SHARMA, A., JANGIRALA, S. 2023. Artificial intelligence (AI) bias impacts: Classification framework for effective mitigation. Issues in Information Systems. 24(4), 367-389. https://doi.org/10.48009/4_iis_2023_128
  2. ČESKÝ STATISTICKÝ ÚŘAD. 2023. Digitální ekonomika v číslech 2023: Česká republika a EU [Digital economy in figures 2023: The Czech Republic and the EU] [cit. 2024-10-08]. https://csu.gov.cz/docs/107508/729bcd9a-0da5-70cb-734f-c6e460a70af7/06300523_cela.pdf?version=1.0
  3. FANG, X., CHE, S., MAO, M., ZHANG, H., ZHAO, M., ZHAO, X. 2024. Bias of AI-generated content: an examination of news produced by large language models. Scientific Reports. 14, 5224.
  4. FRIEDMAN, B., NISSENBAUM, H. 1996. Bias in computer systems. ACM Transactions on Information Systems (TOIS). 14(3), 330-347. https://doi.org/10.1145/230538.230561
  5. GROSS, N. 2023. What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI. Soc. Sci. 12(8), 435. https://doi.org/10.3390/socsci12080435
  6. HAENLEIN, M., KAPLAN, A. 2019. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review. 61(4), 5-14. https://doi.org/10.1177/0008125619864925
  7. GUPTA, M., PARRA, C. M., DENNEHY, D. 2022. Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter? Inf. Syst. Front. 24, 1465-1481. https://doi.org/10.1007/s10796-021-10156-2
  8. HOCHMAIR, H. H., JUHÁSZ, L., KEMP, T. 2024. Correctness comparison of ChatGPT-4, Gemini, Claude-3, and Copilot for spatial tasks. Transactions in GIS. 28(7), 2219-2231. https://doi.org/10.1111/tgis.13233
  9. KAPLAN, D. M., PALITSKY, R., ARCONADA ALVAREZ, S. J., POZZO, N. S., GREENLEAF, M. N., ATKINSON, C. A., LAM, W. A. 2024. What's in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT. Journal of Medical Internet Research. 26, e51837. https://doi.org/10.2196/51837
  10. LIN, Z., GUAN, S., ZHANG, W. et al. 2024. Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models. Artif Intell Rev. 57, 243. https://doi.org/10.1007/s10462-024-10896-y
  11. LUCY, L., BAMMAN, D. 2021. Gender and Representation Bias in GPT-3 Generated Stories. In: Proceedings of the Third Workshop on Narrative Understanding. Virtual. Association for Computational Linguistics, p. 48-55.
  12. O'CONNOR, S., LIU, H. 2024. Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities. AI & Soc. 39, 2045-2057. https://doi.org/10.1007/s00146-023-01675-4
  13. SANTURKAR, S., DURMUS, E., LADHAK, F., LEE, C., LIANG, P., HASHIMOTO, T. 2023. Whose opinions do language models reflect? arXiv:2303.17548. https://doi.org/10.48550/arXiv.2303.17548
  14. TAO, Y., VIBERG, O., BAKER, R. S., KIZILCEC, R. F. 2024. Cultural bias and cultural alignment of large language models. PNAS Nexus. 3(9), 346. https://doi.org/10.1093/pnasnexus/pgae346
  15. RANE, N. 2024. Role and challenges of ChatGPT, Gemini, and similar generative artificial intelligence in human resource management. Studies in Economics and Business Relations. 5(1), 11-23. https://doi.org/10.48185/sebr.v5i1.1001
  16. RANE, N. L., CHOUDHARY, S. P., RANE, J. 2024. Gemini versus ChatGPT: Applications, performance, architecture, capabilities, and implementation. Journal of Applied Artificial Intelligence. 5(1), 69-93. https://doi.org/10.48185/jaai.v5i1.1052
  17. SALLES, A., AWAD, M., GOLDIN, L., et al. 2019. Estimating Implicit and Explicit Gender Bias Among Health Care Professionals and Surgeons. JAMA Netw Open. 2(7), e196545. https://doi.org/10.1001/jamanetworkopen.2019.6545
  18. SHIHADEH, J., ACKERMAN, M., TROSKE, A., LAWSON, N., GONZALEZ, E. 2022. Brilliance Bias in GPT-3. In: 2022 IEEE Global Humanitarian Technology Conference (GHTC). p. 62-69.
  19. SOUNDARARAJAN, S., JEYARAJ, M. N., DELANY, S. 2023. Using ChatGPT to Generate Gendered Language. In: 31st Irish Conference on Artificial Intelligence and Cognitive Science (AICS). IEEE, p. 1-8.
  20. STEINBERG, A. L., HOHENBERGER, C. 2023. Can AI close the gender gap in the job market? Individuals' preferences for AI evaluations. Computers in Human Behavior Reports. 10, 100287. https://doi.org/10.1016/j.chbr.2023.100287
  21. TEGMARK, M. 2020. Život 3.0: Člověk v éře umělé inteligence [Czech translation of Life 3.0: Being Human in the Age of Artificial Intelligence]. Argo/Dokořán. ISBN 978-80-7363-948-8
  22. THAKUR, V. 2023. Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications. arXiv:2307.09162. https://doi.org/10.48550/arXiv.2307.09162
  23. VARSHA, P. S. 2023. How can we manage biases in artificial intelligence systems - A systematic literature review. International Journal of Information Management Data Insights. 3(1), 100165. https://doi.org/10.1016/j.jjimei.2023.100165
  24. VEALE, M., VAN KLEEK, M., BINNS, R. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. April 21-26, Montreal, Canada. https://doi.org/10.48550/arXiv.1802.01029