OpenAI’s head of trust and safety is stepping down. He will be replaced by the company’s chief operating officer, who is also its chief security officer.

Dave Willner, who has led the artificial intelligence firm's trust and safety team since February, said in a LinkedIn post that he is leaving OpenAI as an employee and transitioning into an advisory role to spend more time with his family.

Willner's exit comes at a crucial moment for OpenAI. Since the viral success of the company's AI chatbot ChatGPT late last year, OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.

OpenAI was among seven leading AI companies that on Friday made voluntary commitments, agreed to with the White House, meant to make AI systems and products safer and more trustworthy. As part of the pledge, the companies agreed to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, making it easier for consumers to tell whether content comes from an accurate source. The move is part of a larger White House initiative to push companies to be more transparent about the source of their AI technology and how it is being used.

OpenAI's Chief Technology Officer, Mira Murati, will become the trust and safety team's interim manager, and Willner will advise the team through the end of this year. "We are seeking a technically-skilled lead to advance our mission, focusing on the design, development and implementation of systems that ensure the safe use and scalable growth of our technology," the company said in a statement, adding that Willner's work "has been foundational in operationalizing our commitment to the safe and responsible use of our technology."

The U.S. Department of Justice is investigating the use of artificial intelligence in elections and other public services.
The Department of Homeland Security is also looking into the potential for AI to be used to manipulate voters and spread disinformation in the United States in the coming years.

In March, OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing. He told lawmakers that the potential to use AI for disinformation in elections was among "my areas of greatest concern, especially because we're going to face an election next year and these models are getting better."

In his Thursday post, Willner noted that OpenAI is going through "a high-intensity phase in its development" and that his role had "grown dramatically in its scope and scale since I first joined." He said his work has paved the way for future progress in the field and that the company is looking for a technically-skilled lead to advance its mission. In its statement about Willner's exit, OpenAI said he has been a key figure in the development of the company's technology and an integral part of its success over the last year, and that he will remain an advisor to the team until the end of the year.
