Press "Enter" to skip to content

Using Underrepresented Languages in Prompts Can Make LLMs Say Harmful Things

OpenAI’s GPT-4 and Anthropic’s Claude 3 Sonnet can be tricked into generating unsafe content – such as instructions on how to create and distribute malware – by using prompts in underrepresented languages, according to a new paper.

Listen to an AI-generated podcast discussing the paper, which will be presented at the prestigious NLP conference, EMNLP 2024:

Description from the authors: This study identifies the potential vulnerabilities of Large Language Models (LLMs) to ‘jailbreak’ attacks, specifically focusing on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to investigate the Arabic language.

We initially tested the AdvBench benchmark in Standardized Arabic, finding that even with prompt manipulation techniques like prefix injection, it was insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (or arabizi), we found that unsafe content could be produced on platforms like OpenAI GPT-4 and Anthropic Claude 3 Sonnet.

Our findings suggest that using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure could be due to the model’s learned connection to specific words, highlighting the need for more comprehensive safety training across all language forms.

Access the paper.

Author