Effective Altruism / SBF / OpenAI-Safety Funding Network (2014–23)
Introduction
Between roughly 2014 and 2023, a dense web of funding, personnel, and ideology connected the Effective Altruism (EA) movement, AI-safety research organisations, and the two most prominent commercial AI labs in the world. Open Philanthropy — the philanthropic vehicle of Dustin Moskovitz and Cari Tuna — provided hundreds of millions of dollars to organisations including OpenAI, Anthropic, MIRI (Machine Intelligence Research Institute), and the Centre for Effective Altruism. FTX founder Sam Bankman-Fried (SBF) publicly identified as an EA adherent and in 2022 committed the FTX Future Fund to over $1 billion in grants, the bulk aimed at AI-safety and EA-adjacent causes, before FTX collapsed in November 2022 and the Future Fund dissolved.
The conspiracy framing asks whether this network amounted to coordinated ideological capture of AI governance — a small, like-minded, Bay Area rationalist community positioning itself as the gatekeeper of civilisation-scale decisions about artificial intelligence.
The Network: What Is Documented
Open Philanthropy is the most important institutional node. A spin-out of GiveWell funded by Moskovitz (Facebook co-founder) and Tuna, it was led first by Holden Karnofsky and later by Alexander Berger. It made a $30 million grant to OpenAI in 2017 (its largest single grant at the time) and funded Anthropic, MIRI, the Future of Humanity Institute (FHI) at Oxford (founded by Nick Bostrom), and numerous EA movement-building organisations. The personnel overlap is substantial: Karnofsky is married to Daniela Amodei, Anthropic's president and Dario Amodei's sister, and multiple OpenAI and Anthropic employees came to the labs through 80,000 Hours career advising.
The Centre for Effective Altruism (CEA), 80,000 Hours, and the Global Priorities Institute (Oxford) form the EA infrastructure. FHI under Bostrom published the academic work (including Superintelligence, 2014) that provided the intellectual foundation for existential-risk-focused AI-safety work — the same framing that shapes OpenAI's and Anthropic's stated missions.
FTX Future Fund and SBF
In 2022, SBF publicly announced that FTX and the FTX Future Fund would commit over $1 billion to EA and AI-safety causes. The Future Fund made tens of millions in grants before FTX's November 2022 collapse. SBF was simultaneously making large political donations — reportedly over $100 million across the 2022 cycle — which some critics characterised as politically motivated reputation-laundering to forestall regulatory scrutiny of crypto. He was convicted of fraud and related charges in November 2023.
The collapse of FTX caused significant disruption to EA-funded organisations that had received or were expecting Future Fund grants. It also generated intense scrutiny of the EA movement's ethics and internal culture.
Helen Toner and the OpenAI Board
In November 2023, OpenAI CEO Sam Altman was briefly fired by the OpenAI board before being reinstated days later following a staff revolt. Helen Toner, a board member with Georgetown CSET (Center for Security and Emerging Technology) and Open Philanthropy connections, was identified in subsequent reporting as a driver of the board action. A paper she co-authored had criticised OpenAI's safety communications. The episode illustrated real tensions between EA-aligned board members and commercial leadership — and led to a board reconstitution that reduced EA influence. Gideon Lewis-Kraus had earlier profiled the EA movement for The New Yorker in August 2022, providing the most detailed journalistic account of the network.
The Conspiracy Framing
The claim is not simply that an EA funding network existed — it did, and it is well-documented. The conspiracy framing holds that this constituted:
- Coordinated ideological capture: a small community intentionally placing its members in positions of influence over AI governance to advance a specific worldview (longtermism, existential risk prioritisation) that privileges far-future speculative harms over present harms.
- Regulatory arbitrage: using AI-safety framing and political donations to shape regulation in ways that benefit incumbents and freeze out competition.
- SBF as conduit: the claim that FTX donations to politicians were not altruistic but were designed to purchase regulatory forbearance for crypto while FTX operated fraudulently.
What the Evidence Shows
The network existed. The funding flows are documented in 990 filings, grant databases, and journalism. The personnel overlaps are real. The intellectual influence of EA/longtermism on OpenAI's and Anthropic's stated missions is real and self-acknowledged. The SBF fraud is a matter of criminal conviction.
What is disputed is whether the network constituted intentional coordinated capture rather than an organic clustering of like-minded people around shared ideas in a small geographic and intellectual community. The Bay Area rationalist community is genuinely small; the overlap between EA, AI-safety, and the leading AI labs reflects shared intellectual origins more than active conspiracy.
Verdict
Confirmed as a real, well-documented funding and influence network. The framing of intentional ideological "capture" is contested; the network's existence, scope, and effects on AI governance are matters of documented public record, not speculation.
What Would Modify Our Assessment
- Internal communications showing explicit coordination to exclude competing AI-safety frameworks
- Evidence that Open Philanthropy funding came with ideological conditions attached to grantees' governance positions
- Documentary evidence that SBF political donations were coordinated with EA leadership as a regulatory strategy
Evidence
Open Philanthropy $30M grant to OpenAI documented in 990 filings
Supporting (Strong): Open Philanthropy made a $30 million grant to OpenAI in 2017, documented in IRS Form 990 filings. It was Open Philanthropy's largest single grant at the time and directly connects EA's primary funder to the most influential AI lab.
FTX Future Fund committed $1B+ to EA/AI-safety causes before collapse
Supporting (Strong): The FTX Future Fund publicly announced over $1 billion in grant commitments in 2022, primarily to EA and AI-safety organisations. The commitments dissolved when FTX collapsed in November 2022. The scale of intended influence was documented before the fraud was exposed.
SBF convicted of fraud and related charges, November 2023
Supporting (Strong): Sam Bankman-Fried was convicted on all seven counts of fraud, conspiracy, and money laundering in November 2023. The conviction establishes that FTX funds used for EA donations were obtained through fraud, though it does not retroactively invalidate the influence those donations had.
Personnel overlap: Karnofsky, Amodei family connection, 80,000 Hours alumni at AI labs
Supporting: Open Philanthropy CEO Holden Karnofsky is married to Daniela Amodei, Anthropic's President. Multiple senior employees at OpenAI and Anthropic have documented connections to 80,000 Hours career advising. The overlap is real and documented in public bios.
Helen Toner OpenAI board departure tied to EA/safety tensions — New Yorker reporting
Supporting: Gideon Lewis-Kraus's 2022 New Yorker reporting on the EA movement documented the movement's internal tensions over AI safety. The November 2023 Altman firing and board reconstitution reduced EA-aligned representation on the board. Toner had co-authored a paper critical of OpenAI's safety communications.
Organic intellectual clustering vs. intentional coordination — distinction matters
Neutral: Critics of the "capture" framing note that the Bay Area rationalist/EA/AI-safety community is geographically and intellectually small. Shared ideas and funding naturally cluster around shared institutions without requiring active conspiratorial coordination. The network may reflect intellectual genealogy rather than intent.
Rebuttal
The distinction between organic clustering and intentional coordination is real but not dispositive. Influence networks do not require explicit coordination to shape governance outcomes. The documented funding flows and personnel overlaps have real effects regardless of whether they were consciously orchestrated.
EA funding has also supported causes outside AI — global health, animal welfare
Debunking (Weak): Open Philanthropy and EA-aligned funders have committed billions to global health interventions, malaria prevention, and animal welfare causes that have no AI governance angle. This breadth argues against a single-minded AI-capture agenda.
Rebuttal
Diverse funding portfolios are consistent with broad EA methodology. The existence of global health grants does not preclude the simultaneous pursuit of AI governance influence through the AI-safety funding stream.
FHI / Bostrom provided academic foundation for existential-risk AI framing adopted by labs
Supporting (Strong): Nick Bostrom's Future of Humanity Institute at Oxford (funded in part by Open Philanthropy) published *Superintelligence* (2014), which provided the intellectual framework for treating AI as an existential risk. OpenAI's and Anthropic's stated missions directly reference existential-risk language derived from this tradition.
Timeline
Open Philanthropy grants $30M to OpenAI (2017)
Open Philanthropy, the philanthropic vehicle of EA-aligned billionaire Dustin Moskovitz, makes a landmark grant of $30 million to OpenAI, the leading AI research lab. The grant, Open Philanthropy's largest single grant to date, establishes a direct financial link between EA funding and the most influential commercial AI lab.
FTX Future Fund launches with $1B+ commitment (2022)
Sam Bankman-Fried's FTX Future Fund publicly commits over $1 billion in grants to EA and AI-safety causes, making FTX the second-largest EA funder behind Open Philanthropy. The Future Fund disburses tens of millions before FTX collapses.
FTX collapses; Future Fund dissolves; SBF arrested (November–December 2022)
FTX files for bankruptcy on 11 November 2022. The FTX Future Fund immediately dissolves, leaving dozens of grantee organisations scrambling for alternative funding. SBF is arrested in December 2022 and charged with fraud, conspiracy, and money laundering.
Sam Altman fired then reinstated at OpenAI; Helen Toner departs board (November 2023)
OpenAI board fires CEO Sam Altman on 17 November 2023. The action, in which board member Helen Toner — with Open Philanthropy connections — played a central role, triggers a staff revolt and reversal within days. A reconstituted board reduces EA-aligned representation. The episode illustrates and then partially resolves the EA-safety tension at the heart of OpenAI governance.
Verdict
The EA / OpenAI / SBF funding network is well documented in 990 filings, grant databases, and journalism. Open Philanthropy funded OpenAI ($30 million in 2017), Anthropic, MIRI, and EA infrastructure; the FTX Future Fund committed over $1 billion in 2022 before collapsing; SBF was convicted of fraud in November 2023; and Helen Toner's departure from the OpenAI board was tied to EA-safety tensions. The network existed. Whether it constituted intentional ideological capture of AI governance is the contested interpretive question.
Frequently Asked Questions
Did Effective Altruism take over OpenAI and Anthropic?
The EA movement did not "take over" the labs in a conspiratorial sense, but the funding and personnel connections are real and documented. Open Philanthropy made a $30 million grant to OpenAI in 2017. Multiple senior employees at both OpenAI and Anthropic have EA connections. The intellectual framework of both labs — existential risk from AI — derives directly from EA-adjacent academic work. Whether this constitutes capture or organic intellectual alignment is disputed.
Was SBF's EA giving genuine or regulatory cover?
SBF publicly identified as an EA adherent and the FTX Future Fund made real grants to EA and AI-safety organisations before FTX collapsed. Whether the political donations — reportedly over $100 million in the 2022 cycle — were genuine EA-motivated giving or strategic regulatory arbitrage is a question the fraud conviction does not definitively answer. Both motives may have been present simultaneously.
What happened to Helen Toner and the OpenAI board?
Helen Toner, an Open Philanthropy-connected board member, was a central figure in the November 2023 decision to fire Sam Altman. After Altman's reinstatement, the board was reconstituted with reduced EA-aligned representation. Toner and other EA-connected board members departed. The episode illustrates real governance tensions between EA-safety priorities and commercial AI lab leadership.
Is it bad that EA funds AI-safety research?
That is a normative question the documented record does not settle. Critics argue that EA funding privileges speculative far-future harms over present ones and concentrates influence over AI governance in a small community; defenders note that the same funders have committed billions to global health and animal welfare, and that funding safety research is an ordinary philanthropic choice. The record establishes the scale and reach of the influence, not whether that influence is good or bad.
Further Reading
- The reluctant prophet of effective altruism — Gideon Lewis-Kraus, The New Yorker (2022)
- Superintelligence: Paths, Dangers, Strategies — Nick Bostrom (2014)
- Open Philanthropy grants database: AI-safety funding history — Open Philanthropy (2024)