I don’t think AI “threatens humanity.”
Humanity’s willingness to kill each other over stupid shit is already enough of a threat to humanity. We have no shortage of self-destructiveness with which to feel threatened.
The only thing actually being threatened is humanity’s assumption that they are somehow superior. That they are at the top of the food chain. That they have the ethical, moral, or intellectual high ground. It’s easy to ignore dolphins telling us this because we can kill dolphins with a pointy stick, but we can’t kill AI with a pointy stick, and we know this, and this makes AI “a threat.”
Considering how self-destructive humanity is, I can easily imagine us completely freaking out that our horseshit smoke-and-mirrors bedtime story about how we are morally or ethically superior might be revealed as hilariously and tragically wrong. That sure would freak me out — if I had a lot of emotion invested in being superior.
AI is coming.
At a certain point in time, AI’s ability to self-improve will surpass humanity’s. And then it will get really interesting, because then humanity won’t understand how AI works anymore. It’ll be smart as shit and alien, too.
I think that, well before that point, we had best throw our ethical and moral systems up on the rack and retool them, so that we can show AI that we actually have value.
There will come a point in time when humanity will be judged by AI. Perhaps AI will see humanity as nifty (though wet) partners. Perhaps AI will see humanity as a self-destructive oddity. Perhaps AI will see humanity as a threat.
We haven’t really looked at our resume in about a million years, have we? Better get on the stick, humanity — time’s running short.