Miles Brundage, OpenAI’s senior adviser for AGI (artificial general intelligence) readiness, has announced his decision to leave the organization. His statement, made public on Wednesday, delivered a sobering assessment: he believes that neither OpenAI nor any other leading AI research lab is adequately prepared for the advent of AGI, a form of AI that matches or exceeds human intelligence.
Brundage, who spent six years shaping the company’s approach to AI safety, emphasized the urgency of the issue: “Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready.” This view, he explained, is widely acknowledged among OpenAI’s leadership. He drew a distinction, however, between the current lack of readiness and the separate question of whether the company and the world could become ready in time.
His departure is part of a broader trend at OpenAI, where several key figures in the safety domain have recently resigned. Notable exits include Jan Leike, a prominent researcher who voiced concerns that safety protocols were losing ground to commercially appealing products, and Ilya Sutskever, a co-founder who left to establish his own AI venture focused on safe AGI development.
Brundage’s exit coincides with the dismantling of his “AGI Readiness” team, which followed the earlier disbandment of the “Superalignment” team devoted to mitigating long-term AI risks. These changes highlight growing tension between OpenAI’s founding mission of advancing AI safely and its expanding commercial ambitions. Reports suggest the company is under pressure to convert from a nonprofit model to a for-profit public benefit corporation within two years; if it fails to do so, it could be required to return funds from the substantial $6.6 billion investment round it recently closed, a condition that has raised concerns among those prioritizing ethical AI development.
Brundage cited constraints on his ability to conduct research and publish findings as a factor in his decision to leave. He underscored the importance of maintaining independent perspectives in the discourse surrounding AI policy, free from potential industry biases and conflicts of interest. He believes that, from outside the organization, he will be better positioned to influence global AI governance.
This situation reflects a deeper cultural divide within OpenAI. Many researchers joined the organization to push the boundaries of AI research and now find themselves in an increasingly commercial environment. Tensions over resource allocation have also surfaced: Leike’s team was reportedly denied the computing resources it needed for safety research, leading to its eventual disbandment.
Despite these challenges, Brundage noted that OpenAI has offered to support his future work with resources such as funding, API credits, and early access to models, with no conditions attached. The offer underscores the complicated relationship between individual commitments to AI safety and the commercial imperatives shaping OpenAI’s strategic direction.