Ahead of the upcoming AI Safety Summit in Seoul,
leading AI scientists from institutions like the University of
Oxford are urging world leaders to act on AI risks. Despite the
pledges made at the previous summit in Bletchley Park, the
experts argue that progress has been insufficient. According to
Dr. Jan Brauner, the current AI landscape is dominated by a
relentless pursuit of technological advancement, with safety and
ethics treated as secondary concerns. The experts' consensus paper,
published in the journal Science, stresses that without a focus on
safe development, AI may pose serious risks to society,
especially as the potential for rapid, transformative AI
capabilities looms within the decade.
The authors, including renowned AI figures such as
Geoffrey Hinton and Dawn Song, outline critical priorities for
global AI policy. They recommend establishing well-funded,
expert-staffed oversight institutions and highlight a stark
funding gap: in the United States, the AI Safety Institute's budget is
just $10 million, compared with the Food and Drug
Administration's $6.7 billion. They also advocate for mandatory,
rigorous risk assessments and call for enforceable standards on
AI safety, urging AI companies to adopt “safety cases” similar to
those in other high-stakes fields like aviation. These safety cases
would place the burden of proof on developers to demonstrate
that their technologies cause no harm.
Additionally, the paper proposes “mitigation standards”
that scale automatically with AI capability milestones.
This approach would ensure rapid responses if AI systems
advance quickly, with requirements tightening or
relaxing according to the technology's pace. As global leaders
prepare for the summit, the experts emphasize that addressing AI
risks now is essential for protecting society from potential harm.
This marks the first consensus from such a broad coalition of
international AI experts, underscoring the urgent need for
concrete policy commitments rather than vague proposals.
Internet: <ox.ac.uk> (adapted).