OpenAI, the artificial intelligence company, has outlined a framework for addressing safety concerns in its most advanced models. The plan, published on its website on Monday, includes giving the board the authority to reverse safety decisions.
Microsoft-backed OpenAI will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also establishing an advisory group to review safety reports before sending them to its executives and board. While executives will make the decisions, the board retains the power to overturn them.
Since the launch of ChatGPT, the potential dangers of AI have been a major concern for both AI researchers and the general public. While generative AI has impressed users with its ability to write poetry and essays, it has also raised safety concerns over its potential to spread disinformation and manipulate humans.
In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.