There’s nothing special going on here: when laws at two different levels of government conflict, the higher level prevails. The federal government has the right to regulate AI in a way that preempts state-level governance: https://www.law.cornell.edu/wex/preemption
We are asking people to tell their Senators not to let this provision pass, because it would choke off the best hope for AI regulation given Congress’s lack of interest and heavy lobbying from the AI industry. It doesn’t offer any federal-level regulation; it just makes it so the states can’t implement any of their own.
(FYI I'm the ED of PauseAI US, and we have our own website, pauseai-us.org)
1. It is morally incumbent on every actor to do the right thing by not advancing dangerous capabilities, regardless of whether everyone else does the same, even though the ideal solution is for everyone to pause and then agree on safe development standards. That's what that language refers to. I'm very careful about taking positions as an org, but, personally, I also think unilateral pauses would make the world safer compared to no pauses by slowing worldwide development. In particular, if the US were to pause capabilities development, our competitors wouldn't have our frontier research to follow or imitate, and it would take other countries longer to generate those insights themselves.
2. "PauseAI NOW" is not just the simplest and best message to coordinate around, it's also an assertion that we are ALREADY in too much danger. You pause FIRST, then sort out the technical details.
Feels like your true objection here is that frontier AI development just isn't that dangerous? Otherwise I don't know how you could be more concerned about a few piddling "inaccuracies and misleading statements that I won't fully enumerate" than about nobody doing CAIP's work to get the beginnings of safeguards in place.
Props to @Andres Jimenez Zorrilla 🔸 for dealing with the razzing. Enduring people’s incredulous reactions is an important part of the work, and you did a fantastic job being patient and earnest.