President Joe Biden characterized the new agreement the White House entered into with seven leading artificial intelligence companies as “real and concrete” as his administration works to advance formal bipartisan legislation and a complementary executive order on AI.
“Today, I’m pleased to announce that these seven companies have agreed to voluntary commitments for responsible innovation,” he told the press on Friday. “These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security and trust.”
Among those commitments are efforts to design with safety as an inherent feature of advanced AI products and to protect civil liberties. Biden noted that society is going to see more innovation in technology within the next decade than there has been in the past 50 years, and referenced the harm emerging technologies — such as social media — could wreak without regulation.
“The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans,” Biden said. “We must be clear-eyed and vigilant about the threats from emerging technologies that can pose — don’t have to — but can pose to our democracy and our values.”
He added that formal bipartisan regulation will still be needed to set mandatory policy despite recent executive endeavors, such as the AI framework developed by the White House Office of Science and Technology Policy and released in late 2022.
Some industry experts, however, see the voluntary nature of the commitments as lackluster.
“We know that self-policing is not sufficient, and this voluntary pledge is only made by seven large leading AI companies. There are many more AI companies across the country and globe that will continue to develop and implement AI tools without any oversight whatsoever,” said Collin R. Walke, head of law firm Hall Estill’s Cybersecurity and Data Privacy practice and former Oklahoma state representative, in a statement.
He added that legislative action must be taken, as “the longer Congress sits on the sidelines, the more risks these technologies will pose.”
Several lawmakers are eager to push new AI regulation forward. Sen. Mark Warner, D-Va., supported Biden’s agreement with several Big Tech companies in the absence of regulatory law.
“We must continue to ensure these systems, which are already being adopted and integrated into broader IT systems in areas as wide-ranging as consumer finance and critical infrastructure, are safe, secure, and trustworthy — including through consumer-facing commitments and rules,” Warner said in a statement. “We also need some degree of regulation.”
Across the aisle, Sen. Todd Young, R-Ind., said earlier this week at a Punchbowl News event in Washington, D.C., that he wants Congress to bring a “light touch regulatory initiative” to laws that would govern AI.
“We’re in the midst of a pretty exciting effort,” Young said of Congress’s work to draft regulations. He added that the Senate’s primary goal will be adapting existing laws and developing new regulations to “adjust to this AI-permeated world,” addressing problematic components like algorithmic bias and promoting transparency that doesn’t compromise trade secrets.
Young said he anticipates that legislation on the issue will come out of the Senate within the next six months.