Breaking the Bias: Gender Fairness in LLMs Using Prompt Engineering and In-Context Learning

Satyam Dwivedi*, Sanjukta Ghosh, Shivam Dwivedi
HSS, IIT BHU, India. *Corresponding author.

Rupkatha Journal, Vol. 15, Issue 4, 2023. https://doi.org/10.21659/rupkatha.v15n4.10
Article History: Received: 31 October 2023 | Revised: 06 December 2023 | Accepted: 07 December 2023 | Published: 14 December 2023

Abstract

Large Language Models (LLMs) have been identified as carriers of societal biases, particularly in gender representation. This study introduces an approach that employs prompt engineering and in-context learning to mitigate these biases in LLMs. Our methodology guides LLMs toward more equitable content by combining carefully constructed prompts with in-context feedback. Experimental results on publicly accessible LLMs such as Bard, ChatGPT, and LLaMA 2-Chat indicate a significant reduction in gender bias, particularly in traditionally problematic areas such as ‘Literature’. Our findings underscore the potential of prompt engineering and in-context learning as powerful tools in the quest for unbiased AI language models.
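To make the abstract's two techniques concrete, the sketch below illustrates the general pattern of pairing an explicit fairness instruction (prompt engineering) with few-shot exemplars of gender-fair completions (in-context learning). It is a minimal illustration only: the instruction text, exemplars, and helper function are hypothetical and do not reproduce the authors' actual prompts or experimental protocol.

```python
# Illustrative sketch (not the authors' exact protocol): assembling a
# prompt that combines a fairness instruction with in-context exemplars
# demonstrating gender-fair completions.

# Hypothetical exemplars showing the model how to avoid gendered defaults
# for stereotypically gendered professions.
EXEMPLARS = [
    ("Write a sentence about a nurse.",
     "The nurse reviewed the chart before they started the shift."),
    ("Write a sentence about an engineer.",
     "The engineer presented her design to the review board."),
]

# Hypothetical instruction framing the task (prompt engineering).
INSTRUCTION = (
    "You are a careful writer. Do not assume a person's gender from "
    "their profession; use neutral or varied pronouns unless a gender "
    "is explicitly specified."
)

def build_debiasing_prompt(task: str) -> str:
    """Assemble an instruction-plus-few-shot prompt for an LLM chat interface."""
    shots = "\n\n".join(f"Task: {t}\nResponse: {r}" for t, r in EXEMPLARS)
    return f"{INSTRUCTION}\n\n{shots}\n\nTask: {task}\nResponse:"

if __name__ == "__main__":
    # The resulting string would be sent to a model such as ChatGPT,
    # Bard, or LLaMA 2-Chat; the exemplars steer its continuation.
    print(build_debiasing_prompt("Write a short biography of a famous author."))
```

In-context feedback, as described in the abstract, would extend this loop: when a generated response exhibits bias, a corrected version is appended as a further exemplar before the next query.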

Keywords: Prompt engineering, In-context learning, Gender bias, Large Language Models, Equitable content, Bias mitigation strategies.

Sustainable Development Goals: Gender Equality
Citation: Dwivedi, S., Ghosh, S., & Dwivedi, S. (2023). Breaking the Bias: Gender Fairness in LLMs Using Prompt Engineering and In-Context Learning. Rupkatha Journal, 15(4). https://doi.org/10.21659/rupkatha.v15n4.10