In a letter shared exclusively with The Verge, Warren and Trahan wrote: “Given the discrepancies between your public comments and reports of OpenAI’s actions, we are requesting information about OpenAI’s whistleblower and conflict of interest protections in order to understand whether federal intervention is necessary.”
The lawmakers cited several instances in which OpenAI’s safety procedures have been called into question. For example, they said that in 2022, an unreleased version of GPT-4 was being tested in a new version of Microsoft’s Bing search engine in India before it had been approved by OpenAI’s safety review body. They also recalled the brief ousting of OpenAI CEO Sam Altman from the company in 2023, driven in part by the board’s concern about “commercializing advances before understanding the consequences.”
Warren and Trahan’s letter to Altman comes as the company is dogged by a series of safety concerns that often appear at odds with its public statements. For example, anonymous sources told The Washington Post that OpenAI rushed through safety testing; the Superalignment team (which was responsible in part for safety) was disbanded; and a safety executive resigned, claiming that “security culture and processes have given way to shiny products.” OpenAI spokesperson Lindsey Held denied these claims to The Washington Post, saying the company “has not cut corners on our security processes, although we recognize the stress this release has placed on our teams.”
Other lawmakers have also sought answers about the company’s safety practices, including a group of senators led by Brian Schatz (D-HI) in July. Warren and Trahan asked for further clarity on OpenAI’s responses to that group, including its creation of a new “integrity hotline” for employees to report concerns.
Meanwhile, OpenAI appears to be on the offensive. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced artificial intelligence models can safely aid bioscience research. Just last week, Altman announced that OpenAI had removed a non-disparagement clause for employees and a clause allowing for the cancellation of vested equity, a key issue in Warren and Trahan’s letter.
Warren and Trahan asked Altman for information about how the company’s new AI safety hotline for employees is used and how the company follows up on reports. They also requested “detailed statistics” on all the times OpenAI products “bypassed security protocols” and under what circumstances products were allowed to skip security review. The lawmakers are also seeking information on OpenAI’s conflict-of-interest policies. They asked Altman whether he has been required to divest any outside holdings and “what specific safeguards are in place to protect OpenAI from financial conflicts of interest.” They asked Altman to respond by August 22.
Warren also noted that Altman has been outspoken about his concerns regarding artificial intelligence. Last year, Altman warned in Senate testimony that AI’s capabilities could “seriously destabilize public safety and national security,” emphasizing that it is impossible to predict every potential misuse or failure of the technology. The warnings appear to be resonating with lawmakers: in California, OpenAI’s home state, state Sen. Scott Wiener is pushing a bill to regulate large language models, including provisions that would hold companies legally liable if their models are used in harmful ways.