Study Uncovers Political Biases in AI: ChatGPT Shifts, Bard Leans Left
May 20, 2024
Researchers discovered subtle political biases in AI-generated text from ChatGPT and Bard.
ChatGPT's political orientation shifted over time and varied across languages.
Bard consistently exhibited a left-leaning bias.
Biases that go undetected could be used to manipulate readers' political views.
The authors developed models to detect political bias in AI-generated text.
A multilingual corpus of newspaper articles with political stance annotations was constructed.
The study shows the feasibility of creating meaningful political stance models using distant supervision with a diverse corpus.
Recognizing biases in AI-generated content is vital for a more informed understanding of news.
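The distant-supervision approach mentioned above can be illustrated with a toy sketch: instead of hand-labeling each article's stance, every article inherits a label from its outlet's known political lean, and a classifier is trained on those noisy labels. The outlet names, the lean map, and the Naive Bayes model below are all hypothetical assumptions for illustration, not the study's actual method or data.

```python
from collections import Counter, defaultdict
import math

# Hypothetical distant-supervision labels: each article is tagged with
# its outlet's known political lean rather than a per-article annotation.
OUTLET_LEAN = {"outlet_a": "left", "outlet_b": "right"}

# Toy corpus of (outlet, article text) pairs -- invented for illustration.
corpus = [
    ("outlet_a", "workers deserve stronger unions and public healthcare"),
    ("outlet_a", "climate regulation protects public health"),
    ("outlet_b", "tax cuts and deregulation drive economic growth"),
    ("outlet_b", "strong borders and lower taxes help business"),
]

def train(corpus):
    """Train a Naive Bayes stance model on distantly supervised labels."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for outlet, text in corpus:
        label = OUTLET_LEAN[outlet]  # label comes from the outlet, not the article
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def predict(model, text):
    """Score each stance with log prior + Laplace-smoothed log likelihoods."""
    word_counts, class_counts = model
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train(corpus)
print(predict(model, "public healthcare and unions"))  # -> left
```

Scaled up to a large multilingual corpus of outlet-labeled articles, the same idea lets researchers build stance models without costly per-article annotation, at the price of label noise when an outlet publishes atypical pieces.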
Summary based on 6 sources