
Linch

@ EA Funds
27004 karma · Joined · Working (6-15 years) · openasteroidimpact.org

Comments (2833)

I enjoyed reading this. It's often useful/interesting to build models of what makes someone good at their job, and this particular one is interesting/somewhat surprising to me at parts, even though I know both of you personally and I'm obviously familiar with the forum.

Anything specific that prompted you to comment this now, may I ask? 

Apologies for doubting you! 

Very much a tangent, but do you have a short explanation for why the shape is likely to be a power law? I think power laws are relatively rare in nature, and the more common generators of power-law distributions (e.g. network effects) don't seem to apply here.

Yeah I don't quite understand that line of argument. Naively, it seems like a bait-and-switch, not unlike "journalists don't write their own terrible headlines." 

Possibly a tangential point, but lots of people in many EA communities think that accelerating economic growth in the US is a top use of funds.

Hmm I think the link does not support your claim. 

Why would value be distributed over some suitable measure of world-states in a way that can be described as a power law specifically (vs some other functional form where the most valuable states are rare)?

I agree with this. I'm probably being too much of a pedant, but it's a slight detriment to our broader epistemic community that people use "power law" as a shorthand for "heavy-tailed distribution" or just "many OOMs of difference between best and worst/median outcomes." I think it makes our thinking a bit less clear when we try to translate back and forth between intuitions and math.
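To make that distinction concrete, here is a minimal way to write down the two notions being conflated (standard textbook definitions, nothing specific to this thread; amsmath assumed for \tfrac):

```latex
% "Power law": the survival function is literally a power of x,
% i.e. a straight line of slope -alpha on a log-log plot.
% "Heavy-tailed" is much broader: e.g. a lognormal also spreads mass
% over many OOMs, but its tail eventually falls faster than any power law.
\[
  \text{power law: } \Pr(X > x) = C\,x^{-\alpha} \quad (x \ge x_{\min}),
  \qquad
  \text{lognormal: } \Pr(X > x) = 1 - \Phi\!\left(\tfrac{\ln x - \mu}{\sigma}\right).
\]
```

Both shapes can generate "many OOMs of difference between best and median outcomes," which is why the shorthand is tempting even when the data wouldn't pick out a power law specifically.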

Thanks a lot for this post! I tried addressing this earlier by exploring "extinction" vs "doom" vs "not utopia," but your writing here is clearer, more precise, and more detailed. One alternative framing I have for describing the "power laws of value" hypothesis, as a contrast to your 14-word summary:

"Utopia" by the lights of one axiology or moral framework might be close to worthless under other moral frameworks, assuming an additive axiology. 

It's 23 words and has more jargon, but I think it describes my own confusions better. In particular, I don't think you need to believe in "weird stuff" to get to many OOMs of difference between "best possible future" and "realistic future", unless additive/linear axiology itself is weird. 

As one simple illustration, humanity can either be correct or incorrect in colonizing the stars with biological bodies instead of digital emulations. Either way, if we're wrong, we lose many OOMs of value (a rough arithmetic sketch follows the list below):

  1. If we decide to go the biological route: biological bodies are much less efficient than digital emulations. It's also much more difficult, as a practical/short-term matter, to colonize stars with bodies, so we capture a smaller fraction of the lightcone.
  2. If we decide to go the digital route, and it turns out emulations don't have meaningful moral value (e.g. at the level of fidelity at which the emulations are run, they are in practice not conscious), then we lose ~100.0000% of the value.
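A rough way to see how the OOMs multiply (the exponents below are placeholder illustrations, not estimates from the post; amsmath assumed for \underbrace):

```latex
% Decompose realized value into (resources captured) x (value per unit resource):
%   V ~ f_lightcone * epsilon * V_max
% Placeholder numbers: if the biological route costs roughly 10^{-2} on f_lightcone
% and 10^{-6} on epsilon (bodies vs. emulations per unit of matter/energy), then
% V is roughly 10^{-8} V_max. Conversely, if emulations turn out not to be
% conscious, epsilon is ~0 and V is ~0.
\[
  V \;\approx\; \underbrace{f_{\text{lightcone}}}_{\text{fraction of resources reached}}
  \times \underbrace{\varepsilon}_{\text{value per unit resource}}
  \times V_{\max}.
\]
```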
29% agree

mostly because of tractability rather than any other reason

To me, "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" does not necessitate them building AGI at all. Indeed the same mission statement can be said to apply to e.g. Redwood Research.

Further evidence for this view comes from OpenAI's old merge-and-assist clause, which indicates that they'd be willing to fold and assist a different company if the other company is a) within 2 years of building AGI and b) sufficiently good. 

They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't seem necessarily unreasonable under general charitable-law principles to me

I'm confused about this line of argument. Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?

I tried to find the original mission statement. Is the following correct?

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. 

If so, I can see how an OpenAI plaintiff can try to argue that "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" necessitates them "winning the AI arms race", but I don't exactly see why an impartial observer should grant them that.
