They stated that they intended to explore how to better use human feedback to train AI systems, and how to safely use AI to incrementally automate alignment research.[148]
In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.[149][150] OpenAI also planned a restructuring to operate as a for-profit company. This restructuring could grant Altman a stake in the company.[151]
In March 2025, OpenAI made a policy proposal for the Trump administration to preempt pending AI-related state laws with federal laws.[152] According to OpenAI, "This framework would extend the tradition of government receiving learnings and access, where appropriate, in exchange for providing the private sector relief from
the 781 and counting proposed AI-related bills already introduced this year in US states."[153]
In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government.[154] This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.[155][156]
The emergence of DeepSeek has led major Chinese tech firms such as Baidu and others to embrace an open-source strategy, intensifying competition with OpenAI. Altman acknowledged the uncertainty regarding U.S. government approval for AI cooperation with China but emphasized the importance of fostering dialogue between technological leaders in both nations.[157] In response to DeepSeek, OpenAI overhauled its security operations to better guard against industrial espionage, particularly amid allegations that DeepSeek had improperly copied OpenAI's distillation techniques.[158]
On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO.
Greg Brockman, the president of OpenAI, was also removed as chairman of the board[161][162] and resigned from the company's presidency shortly thereafter.[163] Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry [pl], and researcher Szymon Sidor.[164][165]
On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure.[166] Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out.[167] The board members agreed "in principle" to resign if Altman returned.[168] On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO.[169] The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined.[170]
On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events.[171] Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.[172] About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign.[173][174] This prompted OpenAI investors to consider legal action against the board as well.[175] In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time.[176]
On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstituted board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining.[177] On November 22, 2023, emerging reports suggested that Sam Altman's dismissal from OpenAI may have been linked to his alleged mishandling of a significant breakthrough in the organization's secretive project codenamed Q*. According to sources within OpenAI, Q* is aimed at developing AI capabilities in logical and mathematical reasoning, and reportedly involves performing math on the level of grade-school students.[178][179][180] Concerns about Altman's response to this development, specifically regarding the discovery's potential safety implications, were reportedly raised with the company's board shortly before Altman's firing.[181] On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations;[182] Microsoft resigned from the board in July 2024.[183]
In January 2023, OpenAI was criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour post-tax. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance and management.[184]
In March 2023, the company was also criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this strategic turn. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.[185]
On May 17, 2024, a Vox article reported that OpenAI was asking departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement.[186][187] Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity.[188] Vox published leaked documents and emails challenging this claim.[189] On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.[190]
OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets in exchange for equity, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled.[191]
The plan has been criticized by experts and former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general.[192] The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of how much equity it might receive in exchange.[193] PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission.[194][193]
Legally, under nonprofit law, assets dedicated to a charitable purpose must continue to serve that purpose. To change its purpose, OpenAI would have to prove that its current purposes have become unlawful, impossible, impracticable, or wasteful.[195] Elon Musk, who had initiated a lawsuit against OpenAI and Altman in August 2024 alleging the company violated contract provisions by prioritizing profit over its mission, reportedly used this lawsuit to stop the restructuring plan.[194] On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer.[196][197] The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale,[198] but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued.[197]
In May 2025, the nonprofit's board chairman Bret Taylor announced that the nonprofit would renounce plans to cede control after outside pressure. The capped-profit still plans to transition to a PBC,[199] which critics said would diminish the nonprofit's control.[200]
OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023.[201][202][203] In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work.[204][205] The New York Times also sued the company in late December 2023.[202][206] In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books.[207]
In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4.[208]
In February 2024, The Intercept as well as Raw Story and Alternate Media Inc. filed a lawsuit against OpenAI on copyright grounds.[209][210] The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI.[211]
On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News.[212]
In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against Open AI about the Chat GPT service".[213]
In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available.[214]
OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies instead prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons".[215][216]
As one of the industry collaborators, OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge (AIxCC), which is sponsored by the Defense Advanced Research Projects Agency (DARPA), and to the Advanced Research Projects Agency for Health.[217] In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.[218] In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies.[219]
In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor.[220]
In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.[221][222]
In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. It was filed in San Francisco, California, by sixteen anonymous plaintiffs.[223] They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models.[224]
On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models.[225] In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission.[226]
Suchir Balaji, a former researcher at OpenAI, was found dead in his San Francisco apartment on November 26, 2024. Independent investigations carried out by the San Francisco Police Department (SFPD) and the San Francisco Office of the Chief Medical Examiner (OCME) concluded that Balaji shot himself.[227]
The death occurred 34 days after a New York Times interview in which he accused OpenAI of violating copyright law in developing its commercial LLMs, one of which (GPT-4) he had helped engineer. He was also a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in The New York Times's court filings as potentially having documents relevant to the case. The death led to speculation and conspiracy theories suggesting he had been deliberately silenced.[227][228] Elon Musk, Tucker Carlson, California Congressman Ro Khanna, and San Francisco Supervisor Jackie Fielder have publicly echoed Balaji's parents' skepticism and calls for an investigation.[229][230]
In February 2025, the OCME autopsy and SFPD police reports were released. A joint letter from both agencies to the parents' legal team noted that he had purchased the firearm used two years prior to his death, and had recently searched for brain anatomy information on his computer. The letter also highlighted that his apartment's only entrance was dead-bolted from inside with no signs of forced entry.[227]
In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle was intended to let users make specific chats discoverable, but when users accidentally enabled it while sharing links, some discussions, including personal details such as names, locations, and intimate topics, appeared in search results. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.[231][232][233]
C++ (/ˈsiː plʌs plʌs/, pronounced "C plus plus" and sometimes abbreviated as CPP or CXX) is a high-level, general-purpose programming language created by Danish computer scientist Bjarne Stroustrup. First released in 1985 as an extension of the C programming language that added object-oriented programming (OOP) features, it has since expanded significantly over time. By the time of the C++98 standardization, the language had gained object-oriented, generic (through the use of templates), and functional features, in addition to facilities for low-level memory manipulation for systems like microcomputers or for building operating systems like Linux or Windows.
C++ is usually implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, LLVM, Microsoft, Intel, Embarcadero, Oracle, and IBM.[14]
C++ was designed with systems programming and embedded, resource-constrained software and large systems in mind, with performance, efficiency, and flexibility of use as its design highlights.[15] C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications,[15] including desktop applications, video games, servers (e.g., e-commerce, web search, or databases), and performance-critical applications (e.g., telephone switches or space probes).[16]
C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in October 2024 as ISO/IEC 14882:2024 (informally known as C++23).[17] The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, C++14, C++17, and C++20 standards. The current C++23 standard supersedes these with new features and an enlarged standard library.
Before the initial standardization in 1998, C++ was developed by Stroustrup at Bell Labs since 1979 as an extension of the C language; he wanted an efficient and flexible language similar to C that also provided high-level features for program organization.[18] Since 2012, C++ has been on a three-year release schedule[19] with C++26 as the next planned standard.[20]
Despite its widespread adoption, some notable programmers have criticized the C++ language, including Linus Torvalds,[21] Richard Stallman,[22] Joshua Bloch, Ken Thompson,[23][24][25] and Donald Knuth.[26][27]
In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++.[28] The motivation for creating a new language originated from Stroustrup's experience in programming for his PhD thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working at AT&T Bell Labs, he faced the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his PhD experience, Stroustrup set out to enhance the C language with Simula-like features.[29] C was chosen because it was general-purpose, fast, portable, and widely used. In addition to the influences of C and Simula, other languages shaped the new language, including ALGOL 68, Ada, CLU, and ML.[citation needed]