OpenAI, a nonprofit organization, is reportedly considering a transition to a for-profit company. According to anonymous sources cited by The Information, Sam Altman, co-founder and CEO of OpenAI, told some shareholders last week that the company is considering converting its subsidiary, OpenAI LP, from its current capped-profit structure to an unrestricted for-profit model. The move may be connected to OpenAI's largest investor, Microsoft.
Currently, no external investor holds a seat on OpenAI's eight-member board of directors. Although Microsoft is the largest investor, it holds only an "observer" seat, with no vote and no say in board decisions. If OpenAI becomes a for-profit company, Microsoft could not only exercise shareholder voting rights but also seek board seats, increasing its influence over OpenAI.
With a current valuation of $86 billion, OpenAI's rumored transition to a for-profit company could accelerate a path to an initial public offering (IPO). Altman and other investors would have the opportunity to acquire or increase their shareholdings, improving their prospective returns.
However, some investors have pointed out that OpenAI already allows employees and other investors to sell their shares through regular secondary offerings. In 2023, OpenAI conducted two such offerings, allowing employees to cash out more than $800 million. With ample funding, OpenAI may not face significant pressure to go public.
OpenAI's deviation from its original commitment to remain independent of investor influence has drawn controversy. Founded in San Francisco in 2015, OpenAI began as a nonprofit dedicated to AI research and development. Its goal was to develop artificial general intelligence (AGI) that would benefit all of humanity equally, rather than being monopolized by large corporations or a select few individuals.
In 2019, as its needs for research funding, talent, and cloud computing resources grew, OpenAI created a separate entity, OpenAI LP, with a capped-profit structure to raise money from investors. To preserve the original nonprofit vision, OpenAI capped investor returns at 100 times the invested amount (for example, a $10 million investment could return at most $1 billion), with any excess profit flowing back to the nonprofit to support its operations.
To ensure the nonprofit's independence, OpenAI also barred board members, including Altman, from holding OpenAI shares. As a result, even major investors such as Microsoft hold no board seats and cannot influence company governance.
Altman and the OpenAI team hoped to attract investment while preserving the nonprofit's independence and a stable source of funding, so that everyone could work toward the vision of democratizing AGI. However, the same structure led to a power struggle in late 2023, when the board abruptly ousted Altman, triggering intense employee backlash. The crisis was resolved only after Altman was reinstated.
Rumors of OpenAI's transition to a for-profit company have drawn mixed reactions. For investors, such a transformation would offer stronger guarantees on their returns; OpenAI currently warns investors that their money should be regarded as a "donation" that could be lost entirely with no expected return. Aligning interests through a for-profit structure could also reduce the risk of future internal power struggles.
Many industry observers following AI development have expressed concern about the potential transition. As a nonprofit, OpenAI is largely shielded from shareholder lawsuits accusing it of failing to prioritize shareholder interests in its decision-making, which limits investor interference. If OpenAI becomes a for-profit company, its nonprofit board could lose control, compromising its independence and the founding vision of preventing the monopolization of AI technology.
These concerns have been heightened by the recent appointment of Paul Nakasone, former director of the National Security Agency and former commander of US Cyber Command, to OpenAI's board of directors. The appointment has sparked strong reactions, with former NSA contractor and whistleblower Edward Snowden accusing OpenAI of a deliberate betrayal of human rights and suggesting that the board may face interference from the US government in the future.
In a warning posted on the social media platform X, Snowden wrote, "Never trust OpenAI or its products. There is only one reason to appoint the former director of the National Security Agency to the board. Don't say I didn't warn you."
Sources:
Cointelegraph, The Information