The European Union may need to tighten its AI regulations to ensure that serious AI-related incidents are consistently reported, according to a policy paper published by Swiss-Belgian thinktank Pour Demain.
Current Reporting Requirements Under the EU AI Act
Under the EU AI Act, companies are required to notify the EU AI Office about “serious incidents” involving specific AI foundation models.
The European Commission released a Code of Practice in August, detailing the types of incidents and information that must be reported.
Potential Gaps in Reporting
Pour Demain warns it is “almost certain” that some incidents will go unreported.
Companies could argue that a given AI model was not involved, or shift responsibility to smaller firms, leaving incidents unreported.
Examples cited include AI-related suicides and unauthorized deletion of databases by AI agents.

Recommendations to Strengthen AI Oversight
The thinktank suggests updating the Code of Practice or even revising the EU AI Act to ensure that regulatory bodies are fully informed about serious AI incidents.
Jimmy Farrell from Pour Demain stated, “Serious incidents from AI are on the rise, with evidence suggesting that incidents far more severe could be just around the corner.”
Calls for Increased Resources for the AI Office
Pour Demain also calls for the EU AI Office to expand staffing. Currently, the office has more than 125 staff and plans to grow to 160 by year-end.
The thinktank recommends increasing the workforce to 200 staff, enabling the office to enforce regulations more effectively across all six of its units.
The European Commission has not yet responded to questions regarding staffing or the thinktank’s recommendations.