Joe Biden might not look like the face of new technology, but he’s seasoned enough to recognize a watershed moment and get out in front of it. The President’s new Executive Order lays out a broad framework for mitigating harm along the path to an AI economy. It calls for a reporting structure to notify government agencies about any large language model that poses a serious risk to national security, public health, or consumer safety. It also calls for standardized tools to assess AI models’ safety and for a method of labeling or watermarking AI-generated content to protect against misuse.
The Department of Commerce would be charged with marshaling the plan. On the heels of the order, Gina Raimondo, the U.S. Secretary of Commerce, announced a new U.S. Artificial Intelligence Safety Institute
(USAISI) that would be housed within the Department of Commerce and specifically underneath the department’s National Institute of Standards and Technology (NIST).
Across the pond, 28 countries and the EU published the similar Bletchley Declaration, calling on creators of AI technology to adhere to principles of transparency and accountability.
Of course, this call for standards and accountability has its detractors, mainly free-market proponents who call these steps premature and pessimistic.
The good news is that governments worldwide recognize the magnitude of AI’s potential to change the world and seem intent, for the moment, on guiding it rather than over-regulating it. The best news is that they’re addressing the tidal wave of change that the AI era will usher in, and they’re doing it early and clearly. I liken it to when the Internet first rushed onto the scene; governments took a much more wait-and-see route back then. After decades of Internet and social-media ferment, it’s commendable that governments are taking a “we’ve already seen this rodeo” approach, proactively laying out a framework to ensure that no humans are hurt in the making of this new AI culture.