New AI Regulations Set High Bar for Oversight, Sparking Debate Over Innovation and Security Risks
September 5, 2024
President Biden's executive order and California's AI legislation establish a threshold above which AI models must be reported to the U.S. government: models trained using at least 10 to the 26th power floating-point operations (flops), a count of the total computing operations used in training rather than a per-second rate.
U.S. regulators are focusing on this metric because total training compute serves as a rough proxy for capability, flagging systems powerful enough to pose significant security risks.
This threshold, equating to 100 septillion calculations, raises alarms about the potential for AI technologies to create weapons of mass destruction or conduct catastrophic cyberattacks.
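The unit arithmetic behind that figure is easy to verify. A minimal check, illustrative only and using the short-scale definition of a septillion:

```python
# Sanity check of the unit claim: 10^26 floating-point operations
# equals 100 septillion operations (1 septillion = 10^24, short scale).
threshold_flops = 10**26
septillion = 10**24
print(threshold_flops // septillion)  # 100
```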
However, some tech leaders express concerns that these metrics may not adequately reflect the risks posed by AI, potentially stifling innovation in the startup sector.
Experts like physicist Anthony Aguirre acknowledge that while floating-point operations provide a basic measure of AI capability, they oversimplify the intricacies of AI systems.
Experts emphasize the need for regulatory flexibility as AI technology evolves, cautioning against a hands-off approach that could overlook the rapid advancements in AI capabilities.
Critics argue that the thresholds are arbitrary and may hinder the development of beneficial AI technologies.
In contrast, the European Union's AI Act sets a lower threshold of 10 to the 25th power flops, which could sweep in some AI systems already in operation, while China is also exploring similar regulatory measures.
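To make the gap between the two regimes concrete, here is a short illustrative sketch; the threshold values are the ones reported above, while the function and the sample figure are hypothetical:

```python
# Illustrative only: check a hypothetical training-compute figure against
# the reporting triggers described above. Both thresholds count total
# training flops, not a per-second rate.
US_THRESHOLD = 10**26  # Biden executive order / California legislation
EU_THRESHOLD = 10**25  # EU AI Act

def triggers(training_flops: float) -> list[str]:
    """Return which reporting thresholds a training run would cross."""
    hits = []
    if training_flops >= EU_THRESHOLD:
        hits.append("EU AI Act (1e25 flops)")
    if training_flops >= US_THRESHOLD:
        hits.append("U.S. executive order / California bill (1e26 flops)")
    return hits

# A hypothetical run at 3e25 flops crosses only the lower EU bar,
# illustrating that the EU trigger sits a full order of magnitude below.
print(triggers(3e25))  # ['EU AI Act (1e25 flops)']
```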
Debate continues among AI researchers regarding the best methods to evaluate AI capabilities and their potential risks, with many viewing current metrics like flops as insufficient.
Moreover, policy researchers warn that the flops metric may soon become outdated as AI developers discover ways to achieve more with less computing power.
California State Senator Scott Wiener defends the legislation, asserting that it aims to exclude models unlikely to cause critical harm, viewing the current metrics as temporary and subject to future adjustments.
Additionally, California's proposed AI safety bill stipulates that regulated AI models must also cost at least $100 million to build, a second criterion intended to limit the law's reach to the largest developers.
Summary based on 5 sources
Sources
PBS News • Sep 4, 2024
Regulators turn to math to determine when AI is powerful enough to be dangerous
Yahoo Finance • Sep 4, 2024
How do you know when AI is powerful enough to be dangerous? Regulators try to do the math
The Seattle Times • Sep 4, 2024
How do you know when AI is powerful enough to be dangerous? Regulators try to do the math