The United States has announced the establishment of its own institute dedicated to the regulation and oversight of artificial intelligence (AI). The announcement, seen as a bold move, was made by Gina Raimondo, US Commerce Secretary. It coincided with UK Prime Minister Rishi Sunak’s hosting of a summit aimed at shaping global standards and governance for AI.
Evaluation of risks associated with AI
Although the UK has its own initiative to establish an International AI Safety Institute, Raimondo voiced support and appreciation for the British effort. The US institute, she explained, would focus on developing “best-in-class standards for safety, security, and testing” while evaluating both known and emerging risks associated with AI.
This announcement took place during a two-day summit at Bletchley Park in England, attended by prominent tech leaders, including Elon Musk and OpenAI’s Sam Altman. The UK’s primary objective for this summit is to contribute to the development of worldwide regulations and oversight for AI technologies.
While British officials downplayed any potential discord with their American counterparts, a notable tech CEO pointed out that the US’s position reflected a desire to maintain commercial control in the AI sector, given that it is home to several tech giants.
The summit’s discussions revolved around addressing extreme risks related to AI, including its potential applications in the development of biological and chemical weapons. Speaking at a separate event in London, US Vice President Kamala Harris argued that AI models already in operation pose threats that are “existential” to the people they harm, such as when a senior citizen loses healthcare coverage because of a faulty algorithm.
Encouraging innovation and entrepreneurship
US President Joe Biden had already issued an executive order earlier in the week, described by his administration as “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.” The order requires developers of the most powerful AI systems to share information about the safety of their tools and mobilizes agencies across the US government.
The summit brought together representatives from 28 countries, including the US, UK, and China, who collectively agreed on what they characterized as the first global commitment of its kind. In their joint communiqué, they pledged to work collaboratively to ensure that artificial intelligence is used in a “human-centric, trustworthy, and responsible” manner.
However, the event also highlighted divisions, particularly over open-source AI models, between large corporations, startups, and governments around the world. Mustafa Suleyman, the CEO of AI startup Inflection and a co-founder of Google DeepMind, acknowledged that smaller, open-source models can encourage innovation and entrepreneurship. Yet he cautioned that they also give individuals the power to affect the world at an unprecedented scale.