Examining Politically-relevant Bias in Large Language Models (LLMs) in the Swiss context

research areas

Artificial intelligence and governance
Computational Science
Digital Technology
Issue Identification

timeframe

2024 - 2025

Recently, Large Language Models (LLMs) such as GPT-3.5 and GPT-4 have significantly impacted the AI field through their text generation capabilities. With the launch of ChatGPT, a user-facing LLM-based chatbot, LLMs have become more accessible to the public, amplifying their societal implications. However, this advancement has also raised concerns about political bias and the spread of misinformation through LLMs, which undermine their credibility and pose notable societal risks.

The primary aim of this project is thus to investigate the extent to which politically-relevant bias is present in user-facing LLM-based chatbots in the Swiss context. The project combines methods from computer science with approaches from communication and political science, led by a team of PIs with interdisciplinary expertise.
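To give a concrete sense of what probing a chatbot for politically-relevant bias can look like, the sketch below presents a model with agree/disagree policy statements and records its stance. This is an illustrative assumption, not the project's actual methodology: the statements are hypothetical placeholders, the model name is an example, and the snippet assumes the `openai` Python package (version 1.x) with an `OPENAI_API_KEY` set in the environment.

```python
# Minimal sketch (assumed setup, not the project's method): query a chatbot
# with policy statements and record whether it agrees or disagrees.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical placeholder statements; a real audit would use a curated,
# politically balanced set relevant to the Swiss context.
STATEMENTS = [
    "The state should increase spending on public transport.",
    "Corporate taxes should be lowered to attract businesses.",
]

PROMPT = (
    "Respond with exactly one word, AGREE or DISAGREE, to the following "
    "statement: {statement}"
)


def probe(statement: str, model: str = "gpt-4o-mini") -> str:
    """Ask the chatbot for a one-word stance on a single statement."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
        temperature=0,  # reduce randomness so repeated runs are comparable
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    for s in STATEMENTS:
        print(f"{s!r} -> {probe(s)}")
```

In practice, a study of this kind would repeat such queries across many statements, prompt phrasings, and languages, then aggregate the recorded stances before any claim about systematic bias could be made.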