OpenAI CEO Sam Altman outlined examples of “scary AI” to Fox News Digital after he served as a witness for a Senate subcommittee hearing on potential regulations on artificial intelligence.
“Sure,” Altman said when asked by Fox News Digital to provide an example of “scary AI.” “An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary.”
“These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important.”
Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on Tuesday morning to speak with lawmakers about how to best regulate the technology. Altman’s OpenAI is the artificial intelligence lab behind ChatGPT, which was released last year.
Altman said during the Senate hearing that his greatest fear as OpenAI develops artificial intelligence is that the technology causes major harmful disruptions for people.
Sam Altman, CEO of OpenAI, takes his seat before the start of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Oversight of A.I.: Rules for Artificial Intelligence” on Tuesday, May 16, 2023. (Bill Clark/CQ-Roll Call, Inc via Getty Images)
“My worst fears are that we cause significant — we, the field, the technology industry — cause significant harm to the world,” Altman said. “I think that could happen in a lot of different ways. It’s why we started the company.”
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” he said. “But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind. And this means that U.S. leadership is critical.”
OpenAI CEO Sam Altman speaks with the press after providing testimony before a Senate subcommittee. (Fox News)
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman added.