Breaking News

OpenAI Establishes Safety Committee for New Model Training

OpenAI has formed a Safety and Security Committee as the company begins training its next artificial intelligence model. The committee will be led by board directors Bret Taylor, Adam D'Angelo, and Nicole Seligman, along with CEO Sam Altman, according to a company blog post. The move comes as the Microsoft-backed company's advanced AI chatbots, which can generate human-like conversations and images from text prompts, have raised safety concerns.


Former Chief Scientist Ilya Sutskever and Jan Leike, who led OpenAI's Superalignment team, recently departed the company. OpenAI disbanded the Superalignment team earlier this month, reassigning some members to other groups, according to CNBC. The new committee's first task is to review and strengthen OpenAI's safety practices over the next 90 days, after which it will present its recommendations to the board. Following the board's review, OpenAI will publicly share an update on the measures it adopts.

The committee also includes newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight. OpenAI plans to consult additional experts, including Rob Joyce, former director of cybersecurity at the US National Security Agency, and John Carlin, a former Department of Justice official. While OpenAI did not disclose details about the new AI model it is training, it said the model would significantly advance the company's capabilities toward achieving Artificial General Intelligence (AGI).
