Henry Papadatos
Managing Director, SaferAI

Hi, I’m Henry,
As a technical AI risk management expert, I focus on addressing the growing gap between rapid AI advancement and our ability to manage the associated risks.
I am the Managing Director of SaferAI, where I work on both AI governance and technical solutions to improve risk management for frontier AI systems.
Governance
I contribute to several initiatives in the governance space. I serve in an expert working group for the EU AI Act's Codes of Practice for general-purpose AI models; my working group focuses on risk taxonomy and on risk identification and assessment.
I also took part in the OECD task force in charge of drafting the G7 Hiroshima AI Process reporting framework. Just before the Paris AI Action Summit, I advocated for international AI safety standards in an op-ed published in TIME. Most recently, I joined the OECD expert group on AI Risk & Accountability to further inform global governance.
Technical research
On the technical front, I developed a risk management ratings system for AI developers, which has been featured in TIME and Euractiv. I then published a comprehensive AI risk management framework that integrates current AI practices with proven risk management strategies from other industries.
I am now working on improving risk modeling and quantitative risk assessment for AI-enabled cyber risks. As part of this effort, I recently published a paper addressing a key challenge in AI risk assessment: connecting empirical measures of AI capabilities to concrete estimates of real-world harms.
Prior to joining SaferAI, I conducted technical research on large language model alignment at the Center for Human-Compatible AI at UC Berkeley.