Markus Anderljung, director of policy and research at the Centre for the Governance of AI, wrote on X that he served as one of the vice chairs for the document.
"We've got many more drafts to go until May 2025. I'd be keen on folks' input on how it can be improved," Anderljung wrote. His research focuses on AI regulation, compute governance, and related topics; he is one of many contributors to the draft.
The news comes as OpenAI, Google, Microsoft, Anthropic, Perplexity and many other AI companies are on the verge of moving advertising to the next level of performance and measurement.
The EU's final document will play an important role in guiding the future development and deployment of trustworthy and safe general-purpose AI models in a variety of industries, including advertising.
When complete, it will detail transparency and copyright-related rules for providers of general-purpose AI models.
These requirements could pose a challenge for the small number of providers of the most advanced models, the document states.
The code will also detail a taxonomy of systemic risks, along with risk assessment measures and technical and governance mitigation measures.
The risks named include offensive cybersecurity threats, discrimination, loss of control over autonomous general-purpose AI, automated use of models for AI research and development, and manipulation, such as disinformation and misinformation, that could undermine democratic processes or erode trust in media.