Must Help Electrons

Effective Altruism, the IAEA, and AI

Nick Doiron
6 min read · Nov 27, 2022

In 2018, I presented at a cybersecurity workshop at the IAEA Safeguards Symposium about the risk of quantum computers breaking encryption. One of my later musings is that if there’s a 1% chance that I contributed 1% to stopping a small nuclear exchange, then probabilistically I saved dozens of lives that day. But that kind of mathematical benefit is not intuitive, or even constant. The quantum industry is slower than promised. People in the room have forgotten. I can continue to make claims about how important I was to peace… unless a nuclear war does break out, when rambling on about probability wouldn’t go over well in the mineshaft shelter.
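For concreteness, here’s that back-of-the-envelope expected-value math as a quick Python sketch. Every number in it is an assumption for illustration only; the casualty figure is my own guess, not from any real model:

```python
# Expected lives saved = P(my talk mattered) x P(it tipped the outcome) x assumed toll.
# All three numbers below are illustrative assumptions, not real estimates.
p_contributed = 0.01        # 1% chance my talk contributed at all
p_tipped = 0.01             # the 1% share of "stopping a small nuclear exchange"
assumed_deaths = 500_000    # hypothetical toll of a small nuclear exchange

expected_lives_saved = p_contributed * p_tipped * assumed_deaths
print(expected_lives_saved)  # 50.0 -- "dozens of lives", probabilistically
```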

The Safeguards group convened again in 2022, but this time I contributed to peace and diplomacy through the Ploughshares Fund Equity Rises program instead. After being another white guy at a UN conference, I think it’s better that our country sends students with qualifications, helping them break into this expensive bubble of international conferencing and meet the national nuclear labs at their recruitment sessions.
But strictly in comparison, donating is eating my veggies. Lots of other people donate through the same process, I don’t get a platform, and I don’t know all that much about how it relates to us all not getting nuked.

EA promises to channel money in a utilitarian way, toward projects which actually help and benefit humanity. There was a sense (similar to the Gates Foundation’s focus on metrics, or even Bullshit Jobs) that donations and effort were being wasted. The core problem, well deeper than the current reckoning with FTX, is that neither of these types of helping (acting on personal philosophy, or funding a project with direct and abstract impacts) truly has a measurable effectiveness. If you say that you care about long-term impact, all of the factors which you choose to count (or not) for the future are decisions in your head. Eventually you are right back at the start: funding projects which feel right to you.

I’m sure that EAs have complex diagrams to defend funding museums, larger particle accelerators, AI essays, and pet-friendly hurricane shelters, instead of a focus on lead and malaria. But they should be honest about that being a personal preference.

Why people have a personal preference for longtermism

The mindset that I had after the IAEA conference came partially from my own agency and action, but also from listening to the later talks about the future of humanity. Longtermers can list several events which would be devastating personally and globally, yet I haven’t spent time protecting my home from gamma ray bursts. It was exciting to meet people who do work on risks, such as the underground nuclear tests guy.

Suppose I decide it’s cool to work on risk, too. The less likely the risks are (asteroid vs. hurricane), the less likely that my program will ever be put to the test. This makes EA work and donations appealing in a rather insidious way, even for an inevitable and existential risk:

  • If Jeff Bezos moves all of his finances to an anti-asteroid laser perpetual trust to launch in 2150, that would maybe make him the greatest EA humanitarian ever?
  • Even with the Bezos Laser Platform under construction, a pandemic or supervolcano could still throw our comfy capitalist environment into chaos. So there’s always more work for EAs and their untested plans.
  • After 2020, there have been only a few political grumbles over the millions of dollars devoted to pandemic preparedness and national stockpiles of PPE and pox vaccine. So even a complete failure of an anti-risk org might not lead to someone apologizing or having regrets.

An EA / longtermer org which still interests me is ALLFED (Alliance to Feed the Earth in Disasters). Many disaster types share a common problem of food insecurity, and we have a food insecurity problem already as-is. Some of their research keyed me into Tibetan wheat and landraces, which again is compatible with current seed bank / gene bank research and a book that I’m reading about global wheat. So that’s cool.

EAs and self-awareness

Vitalik Buterin (co-founder of Ethereum) posted in support of keeping the EA mindset going. Note that the first items on his list (anti-malaria, pandemic response, mitigating extreme poverty) are causes which plague people today! EAs know which objectives are important, useful, and popular.

According to legend, nuclear war experts at RAND did not save for retirement. A host of the 80,000 Hours podcast expressed sadness at feeling he was unlikely to survive the coming years, but admitted he still invests, and won’t take a risky loan. This at least says to me that he thinks AI apocalypse is not ‘likely’ but something on the level of a sudden car accident or aneurysm.

The podcast and various EA sites have dense writings about why we can’t allow organ harvesting and other weird hypothetical exceptions that follow from pure probability math. Maybe it is a religion after all.

Fringe beliefs in EA which are no longer fringe

In the Twitterverse and in media discourse there’s another trend: some effective altruists are in too deep on the philosophy or technology. There are people walking among us who believe that electrons could be experiencing a great magnitude of suffering, or that it’s worth trying to ‘break out’ of the simulation (or stopping quantum research to keep our simulation less resource-intensive?).
Extreme EA views around AI alignment (learning to follow human-beneficial goals) are more visible and no longer all that uncommon.
I distinctly remember where and when I was reading this post, which circulated online a bit and has hundreds of comments:

So far as I’m concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I’ll take it
[…]
Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

If you earnestly believe that ‘alignment’ is worth a billion lives, and that any one person could be certain enough to take that deal, it would be trivial to justify threats and violence to delay one AI or promote another.
As I said earlier, I’m sure that most EAs know that this is a thought experiment and don’t act on it — but if a group is in its own info bubble, or convinces a deranged person (i.e. stochastic terrorism), then eventually for someone it’s not only an idea.

There is also another segment of EAs who establish group housing or post long thoughts specifically about the role of women or Western vs. non-Western societies. That is precisely cult stuff, Unabomber-manifesto stuff; there’s definitely no need in 2022 to incorporate it into a supposedly egalitarian or humanitarian philosophy.

Some thoughts on non-profits in general

The resentment and confusion stewing around Effective Altruism is not alone; there’s wider confusion about how donations work in 2022:

  • There’s tension between Planned Parenthood, the American Red Cross, and the ACLU on one side, and their local chapters and newer nonprofits in their issue-space on the other, over donations, efficiency, and sometimes policy.
  • Mutual aid, bail funds, RIP Medical Debt, and GoFundMes all change how people give and receive financial help.
  • People and companies blindly donated to nonprofits named ‘Black Lives Matter’, a controversy now exploited by right-wing media.
  • The Wikimedia Foundation is failing to give away what it had allocated to social equity.
  • There’s uncertainty about whether social equity funds should go to education and conventional non-profits, or fill in gaps in capital for BIPOC business and VC.
    This factored into ‘ESG’ becoming a popular term in the finance world, then being required or opposed for political reasons. Are there ESG-heads out there thinking that their companies will heal the world? I’m still doubtful that it’s more helpful than donating.
  • Much direct, continuously supplied aid comes from religious groups, which is obvious, but disappearing from view in the elite / mass media bubble. They’re likely how your city receives migrants and refugees:
the UN’s list of refugee resettlement organizations in the USA
  • Church-affiliated groups also underwrite open tech standards, such as SIL International’s work in Unicode, and the LDS Church’s work in genealogy standards.
  • Among young people who work at non-profits, I’ve been shocked by stories of coordinating donations which they then discard. I can only guess that donors are on edge about feeling good about donating, and revealing the real use or purpose of their donation just makes people mad?
    For example, someone was donating Pepsi to a primate care facility, and they would re-bottle it because the chimps wanted Coca-Cola bottles. They thought I was naive for asking why.
    Or, I gave a presentation to a class about OLPC, and I noted that their Scratch version was incompatible with ours. The message was not passed on. Eventually someone must have lied about it.
  • Do schools still do jump-rope-a-thons and other activities to donate to causes? I think if I understood this and what it teaches kids then I would understand all of the non-profit issues.
