Around the world, governments are facing the challenges that artificial intelligence (AI) presents. One of the big questions is how to regulate generative AI systems like ChatGPT. The technologies are developing at lightning speed and the interests of the companies behind them are mostly commercial. It is up to the government to keep them in check, researchers Fabian Ferrari, Antal van den Bosch and José van Dijck advise the Ministry of Economic Affairs and Climate in a technical briefing.

According to the government, more than one and a half million Dutch people use generative AI, of which ChatGPT is now the best-known program. These programs not only provide convenience but also pose risks, warn Ferrari, Van den Bosch and Van Dijck. AI may seem objective at first glance, but it is not. "As a user, you can be exposed to disinformation, discriminatory output and privacy violations, especially in the absence of rules about the production of the underlying models and code."
Several Dutch ministries are now joining forces to develop an overarching "vision for generative AI". The UU researchers are advising the Ministry of Economic Affairs and Climate on the basis of an article on the regulation of AI they previously published in Nature Machine Intelligence. "In our briefing we emphasize that policies on generative AI systems should be formulated precisely, with clear transparency obligations. We also made recommendations on how to achieve this precision."
However, AI policy is not only a task for Dutch regulators, Ferrari, Van den Bosch and Van Dijck stress, but also a European challenge. The EU is currently working on the EU Artificial Intelligence Act, which includes transparency requirements. "But legislators have not yet specified which technical details the companies behind AI must disclose, nor to whom. Independent researchers should have access to the models so they can thoroughly inspect and test them."
"In doing so, it is worth remembering that regulation alone is not a panacea, especially at the national level. A handful of commercial U.S. and Chinese technology companies now dominate the field of generative AI, which is why there must also be investment in open and public alternatives." Such alternatives, like BLOOM, an open-source large language model, already exist and are promising, the researchers say. "Still, these types of AI systems will continue to need government support, especially to compete with the head start of the Big Tech companies."
"At both national and EU levels, it is crucial for regulators to keep up with the rapid development of AI and to regulate the technologies. Collaborations between AI and governance researchers can lead to policy recommendations that would otherwise go unaddressed," Ferrari, Van den Bosch and Van Dijck conclude. "A strong vision for generative AI therefore requires moving beyond ministerial and disciplinary divides."
