
DOI: 10.11118/978-80-7509-990-7-0049
PROTECT YOURSELF FROM AI HALLUCINATIONS: EXPLORING ORIGINS AND BEST PRACTICES
- Jana Dannhoferová, Petr Jedlička
Although AI-powered chat systems such as ChatGPT are broadly reliable, we should not trust them unconditionally. They can sometimes produce irrelevant, misleading, or even false responses, known as hallucination effects. The causes can be both systemic and user-related. User behavior, particularly in prompt engineering, affects the quality and accuracy of the responses provided. Based on a literature review, we have identified the most common types of hallucination effects and provided examples in each of the categories we created. Finally, we have highlighted what users should consider when writing prompts and made recommendations for minimizing hallucination effects in responses obtained from AI systems. Understanding how hallucinations arise can help ensure that these powerful tools are used responsibly and effectively. However, the quality of responses is always a matter of judgment, and the user's level of expertise and critical thinking remains an important factor.
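To illustrate one such recommendation in practice, the sketch below applies a self-consistency check in the spirit of Wang et al. (2022) and Cheng et al. (2023): the same prompt is sampled several times at non-zero temperature, and the majority answer is accepted only when most samples agree. It is a minimal, illustrative Python sketch; ask_model is a hypothetical stand-in for a real chat-API call, simulated here so the example runs on its own.

import random
from collections import Counter

def ask_model(prompt: str, temperature: float = 0.7) -> str:
    # Hypothetical stand-in for a real chat-API call; replace with your
    # provider's SDK. Here it merely simulates a model that usually
    # answers correctly but occasionally hallucinates.
    return random.choices(["Paris", "Lyon"], weights=[0.8, 0.2])[0]

def self_consistent_answer(prompt: str, n_samples: int = 5,
                           min_agreement: float = 0.6) -> str | None:
    # Sample the same prompt repeatedly and keep the majority answer
    # only if enough samples agree; otherwise flag a possible
    # hallucination (cf. Wang et al., 2022).
    answers = [ask_model(prompt).strip() for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n_samples >= min_agreement else None

answer = self_consistent_answer("What is the capital of France?")
print(answer if answer is not None else "Low agreement - verify manually.")

Disagreement across samples does not prove a hallucination, but it is a useful signal that the claim should be verified manually before the answer is used.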
Keywords: artificial intelligence, AI systems, large language models, hallucination effect, text-to-text prompt engineering
Pages: 49-59, online: 2024
References
- ATHALURI, S., MANTHENA, S., KESAPRAGADA, V., YARLAGADDA, V., DAVE, T. and DUDDUMPUDI, R. T. S. 2023. Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4), e37432. DOI: 10.7759/cureus.37432
- BANG, Y., CAHYAWIJAYA, S., LEE, N., DAI, W., SU, D., WILIE, B., LOVENIA, H., JI, Z., YU, T., CHUNG, W., DO, Q. V., XU, Y. and FUNG, P. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv: 2302.04023. DOI: 10.48550/arXiv.2302.04023
- BONTRIDDER, N. and POULLET, Y. 2021. The role of artificial intelligence in disinformation. Data & Policy, 3, e32. DOI: 10.1017/dap.2021.20
- BUOLAMWINI, J. 2019. Artificial intelligence has a problem with gender and racial bias: Here's how to solve it. Time [Online]. Available at: https://time.com/5520558/artificial-intelligence-racial-gender-bias/ [Accessed 2024, February 19].
- CHENG, F., ZOUHAR, V., ARORA, S., SACHAN, M., STROBELT, H. and EL-ASSADY, M. 2023. RELIC: Investigating large language model responses using self-consistency. arXiv preprint arXiv: 2311.16842. DOI: 10.48550/arXiv.2311.16842
- COSSINS, D. 2018. Discriminating algorithms: 5 times AI showed prejudice. New Scientist [Online]. Available at: https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ [Accessed 2024, February 19].
- FUJIMOTO, S. and TAKEMOTO, K. 2023. Revisiting the political biases of ChatGPT. Frontiers in Artificial Intelligence, 6. DOI: 10.3389/frai.2023.1232003
- GILLHAM, J. 2023. AI hallucination factual error problems [Online]. Available at: https://originality.ai/blog/ai-hallucination-factual-error-problems [Accessed 2024, February 19].
- GUPTA, A., HATHWAR, D. and VIJAYAKUMAR, A. 2020. Introduction to AI chatbots. International Journal of Engineering Research and Technology, 9, 255-258. DOI: 10.17577/IJERTV9IS070143
- HARTMANN, J., SCHWENZOW, J. and WITTE, M. 2023. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint arXiv: 2301.01768. DOI: 10.48550/arXiv.2301.01768
- KADAVATH, S., CONERLY, T., ASKELL, A., HENIGHAN, T., DRAIN, D. et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv: 2207.05221. DOI: 10.48550/arXiv.2207.05221
- KEARY, T. 2024. AI Hallucinations [Online]. Available at: https://www.techopedia.com/definition/ai-hallucination [Accessed 2024, February 19].
- KORZYNSKI, P., MAZUREK, G., KRZYPKOWSKA, P. and KURASINSKI, A. 2023. Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT. Entrepreneurial Business and Economics Review, 11(3), 25-37. DOI: 10.15678/EBER.2023.110302
- LEE, K. 2023. Understanding LLM Hallucinations and how to mitigate them [Online]. Available at: https://kili-technology.com/large-language-models-llms/understanding-llm-hallucinations-and-how-to-mitigate-them [Accessed 2024, February 19].
- LUTKEVICH, B. 2023. AI hallucination [Online]. Available at: https://www.techtarget.com/whatis/definition/AI-hallucination [Accessed 2024, February 19].
- MARR, B. 2023. ChatGPT: What Are Hallucinations And Why Are They A Problem For AI Systems [Online]. Available at: https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/ [Accessed 2024, February 19].
- MISHRA, A. N. 2023. Hallucination in Large Language Models [Online]. Available at: https://medium.com/@asheshnathmishra/hallucination-in-large-language-models-2023-f7b4e77855ae [Accessed 2024, February 19].
- REYNOLDS, L. and MCDONELL, K. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. DOI: 10.48550/arXiv.2102.07350
- RIAZ, A. 2023. 29 Mind-Blowing Examples of AI Hallucinations [Online]. Available at: https://vividexamples.com/examples-of-ai-hallucinations/ [Accessed 2024, February 19].
- RICHARDSON, C. and HECK, L. 2023. Commonsense Reasoning for Conversational AI: A Survey of the State of the Art. arXiv preprint arXiv: 2302.07926. DOI: 10.48550/arXiv.2302.07926
- ROHRBACH, A., HENDRICKS, L. A., BURNS, K., DARRELL, T. and SAENKO, K. 2018. Object Hallucination in Image Captioning. arXiv preprint arXiv: 1809.02156. DOI: 10.48550/arXiv.1809.02156
- SHEN, Y., HEACOCK, L., ELIAS, J., HENTEL, K. D., REIG, B., SHIH, G. and MOY, L. 2023. ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology, 307(2). DOI: 10.1148/radiol.230163
- WANG, S., TANG, L., MAJETY, A., ROUSSEAU, J. F., SHIH, G., DING, Y. and PENG, Y. 2022. Trustworthy assertion classification through prompting. Journal of Biomedical Informatics, 132, 104139. ISSN 1532-0464. DOI: 10.1016/j.jbi.2022.104139
- SRINIVASAN, R. and CHANDER, A. 2021. Biases in AI Systems. Communications of the ACM, 64(8), 44-49. DOI: 10.1145/3464903
- SUSHIR, T. 2024. What is AI Hallucination, and Can it Be Fixed? Geekflare [Online]. Available at: https://geekflare.com/ai-hallucination/ [Accessed 2024, February 19].
- WANG, X., WEI, J., SCHUURMANS, D., LE, Q., CHI, E., NARANG, S., CHOWDHERY, A. and ZHOU, D. 2022. Self-Consistency Improves Chain of Thought Reasoning in Language Models. arXiv preprint arXiv: 2203.11171. DOI: 10.48550/arXiv.2203.11171
- WEI, J., WANG, X., SCHUURMANS, D., BOSMA, M., ICHTER, B., XIA, F., CHI, E. H., LE, Q. V. and ZHOU, D. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv preprint arXiv: 2201.11903. DOI: 10.48550/arXiv.2201.11903
- XIAO, Y. and WANG, W. Y. 2021. On Hallucination and Predictive Uncertainty in Conditional Language Generation. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Main Volume, 2734-2744. Association for Computational Linguistics.