OpenAI establishes a team to study “catastrophic” AI risks, including nuclear threats.
- October 30, 2023
- AI
OpenAI has recently established a new team called “Preparedness” to assess and address potentially catastrophic risks associated with AI models. The initiative is led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who joined OpenAI as “head of Preparedness.” The team’s primary responsibilities are monitoring, forecasting, and safeguarding against risks posed by future AI systems, ranging from their ability to deceive and manipulate humans (as in phishing attacks) to their potential for generating malicious code.
Preparedness is tasked with studying a range of risk categories, some of which may appear far-fetched, such as “chemical, biological, radiological, and nuclear” threats in the context of AI models. OpenAI CEO Sam Altman, known for expressing concerns about AI-related doomsday scenarios, is taking a proactive approach in preparing for such risks. The company is open to investigating both obvious and less apparent AI risks and is soliciting ideas from the community for risk studies, offering a $25,000 prize and job opportunities with the Preparedness team to top contributors.
In addition to risk assessment, the Preparedness team will work on formulating a “risk-informed development policy” to guide OpenAI’s approach to AI model evaluations, monitoring, risk mitigation, and governance structure. This approach complements OpenAI’s existing work in AI safety, focusing on both the pre- and post-model deployment phases. OpenAI acknowledges the potential benefits of highly capable AI systems but emphasizes the need to understand and establish infrastructure to ensure their safe use and operation. This announcement coincides with a major U.K. government summit on AI safety and follows OpenAI’s commitment to study and control emerging forms of “superintelligent” AI, driven by concerns about the potential for advanced AI systems to surpass human intelligence within the next decade.
You can read more details here on the OpenAI blog.
Yuuma
yuuma at October 30, 2023 10:00:00