Esben Kran

Co-founder @ Apart Research
848 karma · Joined · Working (0-5 years) · San Francisco, CA, USA · kran.ai

Bio

🔶

Comments: 60

Topic contributions: 1

As an EA who is profoundly disappointed at the level of Bay Area EA collaboration with the labs, I must agree. As a technical researcher, I work with the labs when it improves safety, but I'm a big proponent of Dan Hendrycks' view that regulation shouldn't please companies, since they are the natural antagonists of any regulation in this field.

And every time I meet policy professionals in the field, they are extremely scared to ever say "extinction risk," "pause," or anything like it (happy to hear retorts to this from other insiders), which is a serious problem. In that sense, staying quiet becomes a power-seeking move, but once you are in power, if you've never voiced an opinion on your way to the top, no one will listen to you.

It's very relevant to mention that there are definitely some EAs who are honest and pragmatic, but they're too rare.

Looking forward to being proven wrong, but this seems like a profoundly misguided strategy.

Yep, I probably agree with this. Then it's definitely good to lead a promising researcher away from the bad niches and into the better ones!

Great reasoning! If you haven't already, I would include in the equation a consideration of 1) how much you think you would contribute to others' impact (inspiring donation percentages?) and 2) how much it would improve your own (a new career, new projects, new donation opportunities discovered). These events are well-funded for generally pretty good reasons :)

Yeah, makes a lot of sense! I don't take "mid-tier" as offensive, since it's also just about Gwern spending all his time on writing vs. Kat Woods also running an organization - huge respect to both, of course, for what they do.

Great post, hadn't seen that one before.

I'll also mention that I don't think SoTA philosophy happens within any of the areas Luke mentions. If that's classified as academic philosophy, then that's definitely fair. But if you look at where philosophy is developed the most (outside of imaginary parallel worlds), in my eyes it's the summaries of academic work on consciousness (The Conscious Mind), computer science (Gödel, Escher, Bach), AI (Superintelligence), the genetic foundations of morals (Blueprint for Civilization), empirical studies of human behavior in moral scenarios (Thinking, Fast and Slow), politics (Expert Political Judgment), cognitive enhancement (Tools for Thought), and neuroscience (The Brain from Inside Out), all of which have academic centres of excellence that are very inspiring.

Like, the place philosophers who truly want to understand the philosophical underpinnings of reality go today looks very, very different than it did during the Renaissance, in the sense that we now have instruments and mathematics that can measure ethics, morals, and the fundamental properties of reality.

But then I guess you end up with something like Kat Woods vs. Uri Hasson, and that's not a comparison I'd necessarily make. And separately, what Yann lacks in holistic reasoning, he makes up for with the technical work he's done (though he definitely peaked in '96).

The same goes for philosophy. What are some examples where philosophical theorizing on the forums is significantly better than, e.g., the best book on that topic from that year? I can totally buy this, but my philosophy studies during cognitive science were definitely extremely high quality and much better than most work on the forums.

Then of course add that EAs are also present in academia, and maybe the picture gets more muddled.

Thanks for the overview! I agree with decorrelating this movement for a few reasons:

  • EA's critique culture has destroyed innovation in the field and is often the reason a potentially impactful project doesn't exist or is super neutered. The focus here on empowering each other towards moral ambition is great.
  • The name "Effective Altruism" is very academic and unrelatable for most people discovering it for the first time, and the same is true of its community. It's rare that the community you enter when you enter EA is action-oriented, innovative, and dynamic.
  • EA has indeed been hit by a few truckloads of controversy recently, so it's good to provide other options for down the line.

On another note, just noticed the reference to Scandinavian EAs and wanted to give my quick take:

This varies locally; my impression is that it's more common in the Bay Area or Oxford. Scandinavian EAs, for example, are often content doing the 5th most impactful thing they could be doing, celebrating the gains they've made over just doing some random thing. This is highly anecdotal.

I think the Copenhagen EAs have consistently been chasing the most impactful thing out there, but it's true that the bets have been somewhat decorrelated from other EA projects. E.g., Danes now run Upstream Policy, ControlAI's governance work, Apart Research, Snake Anti-Venom, Seldon, and Screwworm Free Future, among others, all of which have ToCs that differ slightly from core EA's but that I personally think are more impactful per dollar than most other projects in their category.

I'm uncertain where the "5th most impactful" claim comes from here, and I may just be under-informed about our neighbors.

Great post and I agree! Curious about one point:

> 6. Academics often prefer writing papers to blog posts. Papers can seem more prestigious and don't get annoying negative comments. To the degree that prestige is directly valuable this is useful, but for most things I prefer blog posts / Facebook posts. I think there are a bunch of "mid-tier" LessWrong / EA Forum writers who I value dramatically more than many (far more prestigious) academics.

What are examples of comparisons between far more prestigious academics and mid-tier LW/EAF writers? Curious about what the baselines here are because it's definitely a bit harder for me to make this comparison.

You can get a subsidized free ticket if you apply for it :-)

To me, it's an interesting decision to pull funding because of this type of coverage. There's a tendency in AIS lobbying to never say what we actually mean in order to "get in the right rooms," but then, when we want to say the thing that matters at a pivotal time, nobody will listen because we got there by being quiet.

Buckling under the pressure of the biggest lobby ever to exist (tech) putting out one or two hit pieces is really unfortunate. The same arguments can be made for why the UK AISI and the AI Safety Summits didn't become even bigger: there was simply no will to continue the lobbying push, and everyone was too afraid for their reputation.

Happy to hear alternative perspectives, of course.

I will mention that an explicit goal with the research hackathon community server we run is that there's little to no interaction between hackathons, since people should be out in the world doing direct work.

For us, this means we invite them into our research lab, or they continue their work elsewhere, instead of getting addicted to the platform. So rather than optimizing for engagement, we optimize for the ratio of information input to action output when people visit.
