On Feb. 7, OECD launched reporting framework on Hiroshima AI process code of conduct.
OECD launched the first global framework for companies to report on their efforts to promote safe, secure, and trustworthy artificial intelligence (AI).
Framework monitors application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems, a key component of the Hiroshima AI process.
Follows leaders' statement on Hiroshima AI process issued by G-7 in Oct. 2024, see #189638.
Document dated Feb. 7, 2025, received from OECD Feb. 10, summarized on Feb. 13.
Overview of Framework
Companies will be able to provide comparable information on their AI risk management practices, such as risk assessment, incident reporting, and information-sharing mechanisms, fostering trust and accountability in the development of advanced AI systems.
World's largest developers of advanced AI systems contributed to this initiative and were instrumental in its pilot phase, testing its features and ensuring its effectiveness.
By aligning the framework with existing risk management systems, OECD aims to promote interoperability and consistency across international AI governance mechanisms.
Effectiveness
Organizations developing advanced AI systems are invited to submit their inaugural reports by Apr. 15, 2025, after which submissions are accepted on a rolling basis.
Reporting organizations are encouraged to update their reports annually.