
Multilingual artificial intelligence often reinforces bias
A Johns Hopkins study finds that multilingual AI systems such as ChatGPT exacerbate a digital language divide, favoring dominant languages like English while marginalizing minority ones. Rather than democratizing information, these large language models (LLMs) create "information cocoons."
The language of a query strongly shapes the information a user receives: when no sources exist in a low-resource language, models often fall back on dominant perspectives (e.g., American English), imposing those views on the user. This linguistic imperialism deepens divides and can leave speakers of different languages with vastly different understandings of the same global conflicts.
To counter this bias and ensure equitable access to diverse perspectives, the researchers recommend developing dynamic benchmarks, exploring varied training strategies, and fostering information literacy so users do not over-rely on biased LLMs.