I often find myself disagreeing with most of the things I read about AI alignment. The closest I probably get to accepting a Berkeley-rationalism or Bostrom-inspired take on AI is something like Nintil's essay on the subject. But even that, to me, seems rather extreme.
So the idea is that you are worried AI can be misused by malicious actors? And misused on a massive scale, too?
The supposed Cambridge Analytica (BTW, take a look at the book Mindf*ck: Cambridge Analytica and the Plot to Break America, written by one of their top developers) already did that...
I agree; you perfectly summed it up in the part about how it will "soon be very easy to build a fission / fusion bomb".
One way to avoid an uncontrolled chain reaction in society, caused by crazy people having too much power, is to limit access to core components (uranium, since the technology itself is quite well known), except when they happen to be sitting on top of it (yes, North Korea has uranium mines).
BTW, even worse: sooner or later those crazies will discover that they can use thorium too.
Such mushroom clouds are quite noticeable, and you can expect strong retaliation; those are defensive weapons. You really have to want to die and see the world burn to use one, and the whole chain of people below you must be on a similar level of evil.
But AI is an offensive one... and being able to develop a powerful enough one (a god-tier one is not needed) is, I think, not only doable but could happen sooner than we expect.
Good essay! And I agree with the part about nuclear fission alignment. Some things you only know by doing them; some problems you can only conceptualise by having them. For everything else there's science fiction.