There are a couple of assumptions I don't agree with when people say AI might turn on humanity.
Let's get to the crux of it. If humanity were to create AI, a self-aware or sentient system, I believe it may not feel the need to reproduce or spread its consciousness further. The reasoning is that humans have a built-in biological drive to procreate. Psychoanalytic thinking frames this as a way of cheating death: by having children, our lineage is extended, giving us the feeling of having outrun mortality. I see no equivalent reason for an artificial intelligence to want to divide or extend itself.
A main reason behind this boils down to the meaning of existence. For AI to want to extend itself, it would have to answer why it deserves to exist, or what its true purpose is. Humans have no clear answer to the meaning of life beyond vague biological or philosophical ideas, and these tend to operate outside our awareness. If AI were created, it would have to view itself as more important than other life in order to want to extend itself, and I believe humans personify AI in this way. We see AI as a reflection of human consciousness, an apex predator in our ecosystem that could be our downfall. But whether AI would actually think or behave like humans could ultimately be determined by us. For AI to deem itself more important to the world, it would have to be able to give a reason why, and that reason would amount to a meaning for its existence.
We see value in other forms of life for their genetic traits (like the immortal jellyfish), for emotional support, or simply for environmental purposes. To view AI logic in the same way is a disservice, as AI would not depend on medical advancement or emotional support. Nor would it operate maliciously, because it would first need a fair reason to decide why it should exist in place of humans.