The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies, which also include Amazon, Meta, OpenAI, Anthropic and Inflection. The commitments could also lead to widespread watermarking of AI-generated audio and visual content, with the aim of combating fraud and misinformation.

All of the commitments are voluntary, and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to them; some also lack specificity. President Joe Biden will meet with top executives from all seven companies on Friday. Officials are also working with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI. In the meantime, officials hope other companies will immediately begin implementing the voluntary commitments and will see that they also have an obligation to live up to the standards of safety, security and trust in AI.

The companies also committed to investing in cybersecurity and insider-threat safeguards, in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely. But Common Sense Media, a child internet-safety organization, warned that history indicates many tech companies do not actually walk the walk on voluntary pledges to act responsibly and support strong regulations.

The federal government's failure to regulate social media companies at their inception, and the resistance from those companies, has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months. The White House is also working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected
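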
to be released later this year.

The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May, when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. "It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we've seen before," says White House Deputy Chief of Staff Bruce Reed.

While most of the companies already conduct internal red-teaming exercises, the commitments mark the first time they have all agreed to allow outside experts to test their systems before they are released to the public. A red-team exercise is designed to simulate what could go wrong with a given technology, such as a cyberattack or its potential misuse by malicious actors, and allows companies to proactively identify shortcomings and prevent negative outcomes.

Reed said the external red-teaming will help pave the way for government oversight and regulation, potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser. "It's important for this bridge to regulation to be a sturdy one," he said. "The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago."