Top Names Join Biden in AI Safety Group, Including OpenAI, Microsoft, Google, Apple, and Amazon


Four months after issuing an executive order demanding that artificial intelligence be built and used safely, the Biden Administration today announced the launch of the U.S. AI Safety Institute Consortium (AISIC). The new consortium boasts more than 200 representatives, including top-tier AI rivals Amazon, Google, Apple, Anthropic, Microsoft, OpenAI, and NVIDIA.

The consortium brings together AI developers, academics, government and industry researchers, civil society organizations, and users united in “the development and deployment of safe and trustworthy artificial intelligence.”

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” Commerce Secretary Gina Raimondo said in a statement. “That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

She explained that the consortium grows out of the executive order President Biden signed in October. That order called for developing guidelines for evaluating AI models, managing risk, ensuring safety and security, and applying watermarks to AI-generated content.

“We will ensure America is at the front of the pack,” Raimondo asserted. “By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

Joining the consortium are representatives from healthcare, academia, labor unions, and banking, including JP Morgan, Citigroup, Carnegie Mellon University, Bank of America, Ohio State University, and the Georgia Tech Research Institute, as well as state and local government representatives.

International partners are also expected to collaborate.

“The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety,” the Commerce Department said. “The consortium… will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.”

The list of participating firms is so extensive that it may be more useful to note which companies did not join. Among the top ten tech companies, those not represented include Tesla, Oracle, and Broadcom. TSMC is also absent, though it is not a U.S.-based company.

The rapid spread of generative AI tools into the mainstream has led to countless instances of misuse and a surge of AI-generated deepfakes online. World leaders—including President Biden and former President Donald Trump—have been the target of many of these fake images. On Thursday, the U.S. Federal Communications Commission announced that AI-generated robocalls using deepfake voices are illegal in the United States.

“The rise of these types of calls has escalated during the last few years as this technology now has the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members,” the FCC said.

Since the launch of GPT-4 early last year, world leaders have grappled with how to rein in AI development. Last May, the Biden Administration met with several AI and tech companies, many of which are now part of the consortium. OpenAI, Google, Microsoft, Nvidia, Anthropic, Hugging Face, IBM, Stability AI, Amazon, Meta, and Inflection all signed a pledge to develop AI responsibly.

“None of us can get AI right on our own,” Kent Walker, Google’s President of Global Affairs, previously said. “We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices.”

Edited by Ryan Ozawa.
