James Herbert

Co-director @ Effective Altruism Netherlands
1891 karma · Joined · Working (6–15 years) · Amsterdam, Netherlands
effectiefaltruisme.nl

Bio

I'm currently a co-director at EA Netherlands (with Marieke de Visscher). We're working to build and strengthen the EA community here.

Before this, I worked as a consultant on urban socioeconomic development projects and programmes funded by the EU. Before that, I studied liberal arts (in the UK) and then philosophy (in the Netherlands).

Hit me up if you wanna find out about the Dutch EA community! :)

Posts
18

Comments
276

I've been thinking about creating an EA Reddit account to contribute to EA-related discussions there. This has finally prompted me to get round to it.

Great stuff! And thanks for taking the time to share what you’ve done.

Do you know if the team working on the EA brand project would be up for talking with professional community builders? At EA Netherlands we're working on our brand quite a bit at the moment, and I think a few other national organisations are too. Since national orgs are usually the main entry point for EA in their region, this should probably be done in coordination with CEA to make sure we're all aligned.

To speak frankly, I'm a little surprised professional community builders haven't been involved in brand work so far. (This comment isn't addressed to you, Agnes; from what I understand it's not your responsibility to keep brand stakeholders in the loop! I'm writing it here in case a relevant person reads it.)

Again, thanks for this work, it looks great! 

Very cool! Thanks for sharing. 

I'm curious: what proportion of your readers does Mailchimp report as being based in Amsterdam? I'm sure the figure is misleading for various technical reasons, but I'm still intrigued.

I heard from the EA Survey team that Amsterdam was the sixth largest hub, in terms of respondents, in the 2024 survey (2.4%). The biggest hub was London, which generated something like 6% of respondents. 

I suspect it's mostly the way you've written it. As a rule of thumb, always aim for high reasoning transparency. I asked ChatGPT o3 to rewrite it in a style that's more likely to appeal to EAs and that frames it in terms of reducing the risk of stable totalitarianism. I've pasted its output below.

"Claim. A second Trump presidency would raise the probability that the United States drifts toward a technologically-entrenched autocracy, thereby increasing the global risk of stable totalitarianism — a scenario where an oppressive regime locks in power for centuries or more.¹

Why that matters. Even a <1 % chance of permanent totalitarian lock-in constitutes an existential risk: it would foreclose almost all future value while inflicting vast suffering.² Emerging tech — especially frontier AI, ubiquitous surveillance, and autonomous weapons — could remove the usual checks (elite defection, popular uprising, leadership succession) that historically topple dictatorships.³

Mechanisms by which Trump plausibly raises the risk:
Erosion of democratic guard-rails. Intent to purge the civil service and use federal agencies for partisan aims weakens the institutions that normally resist autocratic consolidation.
Politicised AI and surveillance. Allies have floated centralising control of federal datasets and AI models; misused, these tools could neutralise opposition and entrench rule.
Norms against power transfer. Open refusal to accept electoral defeat in 2020 signals willingness to test the limits of constitutional constraint.

Scale & neglectedness. The U.S. controls ~25 % of world GDP and a decisive share of AI R&D; trajectory changes here propagate globally. Yet only ~$70 m/yr flows to non-partisan democracy-protection charities, versus >$10 bn in partisan spend.

Tractability. Cost-effective levers include:

  1. State-level democracy infrastructure (voter-registration, local media fact-checks) — historical cost ≈ $300–400 per net vote.
  2. Legal defence funds for civil-service whistle-blowers.
  3. AI-governance policy work that limits executive control over surveillance and autonomous-weapon deployment.

Next steps for funders / organisers: commission a quick Rethink Priorities dive to refine the risk delta; pilot $1–5 m to the most effective democracy-protection orgs; reassess post-election.

¹ See 80,000 Hours problem profile on risks of stable totalitarianism. 80,000 Hours
² 80k’s BOTEC puts the century-level risk at ~0.3 %, with other experts’ estimates up to 5 %. 80,000 Hours
³ Advanced AI could give a ruler decisive military, surveillance and succession advantages, removing historic failure modes for dictatorships."

You’re right to flag the risks of introducing pay gates. I agree it would be a mistake to charge for things that are currently core to how people first engage, especially given how many people first get involved in their 20s when finances are tight.

I think the case for a supporter membership model rests on keeping those core engagement paths free (intro courses, certain events, 1-1 advice, etc.), while offering membership as an optional way for people to express support, get modest perks, and help fund infrastructure.

I also think the contrast you draw between the two (mountaineering clubs = self-benefit, EA = other-benefit) is too simplistic. Most people who get involved in EA do so because they want to become more effective at helping others. That’s a deeply personal goal. They benefit from gaining clarity, support, and a community aligned with their values. EA resources serve them, not just the ultimate beneficiaries.

Likewise, mountaineering clubs aren’t purely self-serving either — they invest in safety standards, trail access, training, and other mountaineering public goods that benefit non-members and future members. 

In both cases, people pay to be part of something they value, which helps them grow and contribute more, and then the thing they value ends up growing as well.

I think you might be overestimating how much the NKBV offers as part of the basic membership. Most of their trips and courses, etc., are paid add-ons. What the €50 fee actually gets you is fairly lightweight: a magazine, eligibility to join trips (not free), discounted access to mountain huts (because the NKBV helps fund them), inclusion in their group insurance policy, and a 10% discount with a Dutch outdoor brand.

That’s not nothing, but it’s modest and it shows that people will pay for affiliation, identity, and access to a community infrastructure, even if the tangible perks are limited.

The EA equivalent could be things like discounted or early access to EAG(x) events, member-only discussion groups, or eligibility to complete advanced courses offered by national EA associations. If multiple countries coordinated, pooled membership fees could help subsidise international EA public goods such as the Forum, EAG(x) events, group support infrastructure, etc.

I think the key point is this: the NKBV shows that people are willing to pay for affiliation, even if the direct perks are modest, as long as the organisation feels valuable to their identity and goals. EA can plausibly do the same.

You're right that EA's current meta budget works out to far more than €50 per "involved person", but that average includes the highly engaged core: people attending conferences, receiving 1-1 support, travel grants, and significant staff time.

A low-touch “supporter” tier is a different product entirely. If you ask someone for €50/year just to back the mission, receive a newsletter, and gain access to certain events, the marginal cost is minimal: card fees, a CRM entry, an occasional email, maybe a €5 welcome sticker. Even doubling every line item puts the cost at €10–20, leaving €30–40 net per person.

We could keep the high-cost, high-impact activities funded by major donors, while using a supporter tier as a lightweight way for sympathisers to express commitment and reduce funding concentration risk.

If I were to bet on what's happened here, I'd bet it has something to do with Thomas leaving.

Looking at his LinkedIn and his Forum history, he seems very well connected in the field of AI safety.

I suspect it was easy to get funding because people knew and trusted him.  
