OpenAI, Anthropic to showcase their powerful AI to US Govt before releasing publicly: Know all the details
OpenAI, Anthropic Partner With US AI Safety Institute: If you have been following generative artificial intelligence and how rapidly it is improving, you will know that many experts and industry leaders have voiced concerns that it could prove to be a danger to humanity. Now, in a step toward addressing those concerns, two of the leading AI companies, OpenAI and Anthropic, have agreed to give the US government access to their major AI models before public release, so that their safety can be evaluated in advance.
OpenAI, Anthropic Partner with US AI Safety Institute
OpenAI CEO Sam Altman took to X (formerly Twitter) to announce the agreement. 'We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,' Altman said.
He added that OpenAI considers it important that this happens at the national level. 'US needs to continue to lead!' he said.
Simply put, the US government will be able to work with the AI companies to evaluate the potential safety risks of their advanced models and provide feedback on how to mitigate them.
"Safe, trustworthy AI is crucial for the technology's positive impact. Our collaboration with the US AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment," said Jack Clark, Co-Founder of Anthropic, and Head of Policy.
What is the US AI Safety Institute?
The US AI Safety Institute is a part of the US Department of Commerce's National Institute of Standards and Technology (NIST). It is a relatively new institution, created by the Biden administration last year to address the risks of AI. Moving forward, it will also partner with the UK government's AI Safety Institute to help AI companies ensure safety.
Commenting on the new partnership with OpenAI and Anthropic, Elizabeth Kelly, director of the U.S. AI Safety Institute, said, "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI."