OpenAI Appoints Former NSA Chief Nakasone to Board

OpenAI announced Thursday it appointed retired U.S. Army Gen. Paul Nakasone, former head of the National Security Agency and U.S. Cyber Command in the Trump and Biden administrations, to its board of directors.

The company behind the popular chatbot ChatGPT said in a news release that it brought in Nakasone because his extensive cybersecurity background will help it safeguard its cutting-edge technology.

Nakasone will be a part of the Safety and Security Committee, a specialized group tasked with making crucial decisions related to safety and security at OpenAI. In May, OpenAI formed the committee to monitor concerns as the Microsoft-backed company begins training its next AI model.

Bret Taylor, chair of OpenAI’s board, emphasized the significance of Nakasone’s addition.

“Artificial Intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” Taylor said in the news release. “Gen. Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission.”

Nakasone said OpenAI’s dedication to its mission “aligns closely with my own values and experience in public service.”

“I look forward to contributing to OpenAI’s efforts to ensure artificial general intelligence is safe and beneficial to people around the world,” he said.

The appointment of a former high-ranking military officer with a background in cybersecurity comes shortly after 13 current and former employees of top AI companies OpenAI, Google DeepMind, and Anthropic penned an open letter cautioning the world about the lack of oversight within the AI industry. The six current employees, all with OpenAI, signed anonymously.

Although the letter cited human extinction as a potential hazard, its immediate purpose was to ensure protection for whistleblowers. The authors acknowledged the great benefits AI could provide humanity if properly regulated, but warned that AI companies have “strong financial incentives to avoid effective oversight” and said they do not trust current corporate structures to initiate the needed change.

The letter warned that AI companies are acutely aware of what their systems can and cannot do, yet noted they have only “weak obligations to share some of this information with governments, and none with civil society.”

© 2024 Newsmax. All rights reserved.