This initiative, supported by 49 countries and regions, primarily OECD members, aims to advance cooperation for global access to safe, secure, and trustworthy generative artificial intelligence (AI). The group supports the implementation of the international guidelines and codes of conduct stipulated in the Hiroshima AI Process Comprehensive Policy Framework (Comprehensive Framework). Endorsed by the G7 Digital and Tech Ministers on December 1, 2023, the Comprehensive Framework was the first policy package the democratic leaders of the G7 agreed upon to steward the principles of human-centered AI design, safeguard individual rights, and enhance systems of trust. The framework sends a promising signal of international alignment on the responsible development of AI, momentum that the support and involvement of the Hiroshima AI Process Friends Group only strengthens. Notably, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (HCOC), established within the Comprehensive Framework, builds upon and aligns closely with existing policies in all G7 nations.
However, because the G7 has stated that the principles are living documents, vast potential remains to be realized, and significant questions lie ahead: How does the Hiroshima AI Process (HAIP) contribute to achieving interoperability of international rules on advanced AI models? How can it add value beyond other international collaborations on AI governance, such as the Bletchley Declaration by Countries Attending the AI Safety Summit? How can the G7, as a coalition of leading democracies, leverage its position as a prominent advocate for responsible AI to encourage broader adoption of its governance principles, even in regions with differing political or cultural contexts?
To answer these questions, this report (1) provides a brief overview of the history of AI governance and relevant instances of international cooperation; (2) analyzes the structure and content of the HAIP, with specific focus on the HCOC; (3) examines how the HCOC fits into the international tapestry of AI governance, particularly within the context of G7 nations, and how it can foster regulatory interoperability on advanced AI systems; and (4) identifies and discusses prospective areas of focus for the future development of the HCOC.
AI Governance: A Historical Overview and International Initiatives
A Short History of AI Governance
Following the deep-learning breakthroughs of the early 2010s, AI adoption surged across industries and sectors. This rapid integration brought to light a multitude of risks. From fatal accidents involving autonomous vehicles to discriminatory hiring practices by AI algorithms, the real-world consequences of AI deployment have become increasingly evident. Furthermore, manipulation of financial markets by algorithmic trading and the spread of misinformation on social media platforms highlight the broader societal concerns surrounding AI.
Fueled by growing awareness of AI risks beginning in the mid-2010s, national governments (including G7 members), international organizations, tech companies, and nonprofits published a wave of AI policies and principles. Prominent examples include the European Union’s 2019 Ethics Guidelines for Trustworthy AI, the Recommendation of the Council on Artificial Intelligence by the OECD in 2019 (updated in 2024), and the Recommendation on the Ethics of Artificial Intelligence by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2021. These publications emphasized pairing AI development with core values such as human rights, democracy, and sustainability, as well as key principles including fairness, privacy, safety, security, transparency, and accountability.