The First Rule Of Regulating AI Is To Demonstrate A Problem Exists

Earlier this week, OpenAI, creator of the popular chatbot ChatGPT, put out a blog post titled "Governance of superintelligence." The post seeks to distinguish how the company thinks public policy should respond to run-of-the-mill AI systems, such as online chatbots and image generators, from how it should respond to "superintelligent" AIs, the yet-to-be-created systems of the future that could exceed expert skill levels across most domains of human activity.
According to the authors of the post, one of whom is OpenAI CEO Sam Altman, AI models "below a significant capability threshold" should be allowed to develop absent "burdensome mechanisms like licenses or audits." For more powerful systems, however, the authors envision a different kind of regulatory regime, calling for "an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc."
While the authors undoubtedly mean well, their recommendations do not follow from their premises. Rather, they put the metaphorical cart before the horse, jumping to a foregone conclusion (in this case, the creation of an international AI agency modeled on the International Atomic Energy Agency) without offering the evidence needed to justify it.
