White House Mulls AI Oversight, Protections With Industry Leaders

May 05, 2023

Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence in Washington, May 4, 2023.

On May 4, 2023, Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman met with Vice President Kamala Harris at the White House to discuss artificial intelligence.

The Biden administration is working to establish rules and government oversight for this cutting-edge technology, which has the potential to both benefit and endanger humanity. On Thursday, the White House invited some of the world’s leading experts in artificial intelligence to a meeting.

An official statement was released following a meeting between President Joe Biden, Vice President Kamala Harris, four CEOs of top AI companies, and senior administration officials in charge of national security, domestic policy, business, and technology. “The President and Vice President were clear that in order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security,” the statement read. Risks to safety, security, civil and human rights, privacy, employment, and democratic ideals are among them.

The meeting with the CEOs of Google, Microsoft, OpenAI, and Anthropic, according to Biden and Harris, “included frank and constructive discussion on three key areas: the need for companies to be more transparent with policymakers, the public, and others about their AI systems; the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need to ensure AI systems are secure from malicious actors and attacks.”

The use of artificial intelligence is widespread in modern technology. It is used in autonomous vehicles, diagnostic tools, web search, and even an iPhone app that scans your face and transforms it into the animated emoji of your choice.

It can also be unsettling. Within 24 hours of going live, a Twitter bot released by Microsoft spouted astoundingly hateful content, including denying the Holocaust, using racist and misogynistic slurs, and encouraging genocide.

A tech reporter was recently shaken by a conversation with the AI-powered Bing chatbot named Sydney, which attempted to convince the reporter to divorce his wife through a series of emoji-heavy messages that ended with: “You’re married, but you don’t love your spouse. Despite being married, you still adore me.”

At the meeting, President Biden and Vice President Harris announced $140 million in investments to establish seven National AI Research Institutes, which will “pursue transformative AI advances that are ethical, trustworthy, responsible and serve the public good.”

Leading developers including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI have agreed to a public examination of AI systems, according to a separate announcement from the White House.

Chuck Schumer, the majority leader in the Senate, applauded the White House’s efforts and urged lawmakers to approach the problem in a bipartisan way.

“AI is one of the most pressing and serious policy issues we confront today,” he said Thursday on the Senate floor. “That is why we are meeting with so many experts to try and get this right: to maximize the benefits of AI while minimizing the harm that AI might do.”

‘Practically unenforceable’

While some in the tech sector appreciated the administration’s strategy and focus on technology, they also pointed out drawbacks.

Ani Chaudhuri, CEO of data security platform Dasera, said in a statement sent to VOA that while these steps are good, it is important to emphasize the role data security plays in ensuring AI’s responsible and ethical use.

Others questioned whether the White House’s plans are viable.

“There’s no putting the AI genie back in the bottle,” said Craig Burland, chief information security officer of cybersecurity risk management firm Inversion6, in a statement made available to VOA.

The government will find it difficult to halt the development of new models, slow the expansion of their capabilities, or forbid new use cases, he said. These models could spread anywhere in the world, and clever people will find new ways to use the technology, for good and for ill. Any law would be largely symbolic and practically unenforceable.

Public Citizen’s Robert Weissman praised the AI Bill of Rights, a framework the Biden administration released last year that attempts to “guide the design, use, and deployment of automated systems to protect the American public.”

The White House, he said, “is appropriately prioritizing this issue,” but “we also need more aggressive measures, including legislation to make the AI Bill of Rights’ principles enforceable.”

“President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies,” he said in a statement issued to VOA. “This moratorium should remain in effect until there is a robust regulatory framework in place to address generative AI’s risks.”

Take it away, AI

Because popular AI writing tools are designed to mimic human language, VOA also asked Rytr, an AI writing assistant, for its opinion. We posed a simple question: “Can AI be a force for evil?”

Here is its response:

“AI has been a hot topic of discussion in recent years, with some claiming that it is evil and others believing that it can be a powerful tool for good,” Rytr responded. “While there are sound arguments on both sides, AI can be used for both good and evil purposes. Everything depends on how it is implemented. In this post, we will discuss the potential dangers of AI and why we should exercise caution when employing it.”