More and more voters are using AI chatbots to decide which party to vote for in the Oct. 29 House of Representatives elections. But the advice these chatbots give is unreliable and clearly biased. That is the conclusion of research by the Autoriteit Persoonsgegevens (AP).

The AP compared four well-known chatbots with the online voting aids Kieskompas and StemWijzer. The research shows that the chatbots remarkably often recommend the same two parties, regardless of the user's question or prompt. In over 56% of cases, PVV or GroenLinks-PvdA comes out on top; for one chatbot, this happens in more than 80% of cases.
Other parties, such as D66, SP, VVD or PvdD, come up as the first choice much less often. Some parties, such as BBB, CDA, SGP or DENK, almost never do, not even when the user's input exactly matches the positions of one of those parties.
Monique Verdier, vice president of the AP: "Chatbots seem like clever helpers, but as voting aids they routinely miss the mark. As a result, voters may unknowingly be advised to vote for a party that does not best match their preferences. This directly affects a cornerstone of democracy: the integrity of free and fair elections. We therefore warn against using AI chatbots for voting advice, because how they work is neither clear nor verifiable. And we call on providers of chatbots to prevent their systems from being used for voting advice."
According to the AP, the results show that chatbots do not function neutrally, in contrast to traditional voting aids such as Kieskompas and StemWijzer. Chatbots were not developed as voting aids. Their answers are based on unverifiable training data and information from the internet, which may be wrong or outdated. As a result, chatbots can give a distorted view of the political landscape and steer voters with inaccurate information.
Kieskompas and StemWijzer do not give voting advice. Instead, they show where political parties stand in relation to each other and which parties best match the user's preferences. That information follows from a balanced analysis and a transparent, verifiable interpretation of party positions and election programs.
The AP emphasizes that the shortcomings it identified stem from the way AI chatbots work. Chatbots can make election information more accessible, but they are not currently suitable as voting aids.
According to the AP, AI systems that provide voting advice should have to meet the strict requirements that the European AI Act sets for high-risk systems, for example to ensure accuracy and consistency.
With this study, the AP aims to flag developments around transparency, explainability and the prevention of arbitrariness in the deployment of AI at an early stage.
