If we make it through the period where humans are using AI in war against each other (the current situation), then we may reach the AI singularity (runaway artificial intelligence).
There are some possible weird scenarios that could play out.
For example, the superintelligence that rapidly arises may decide that it is logical and fair to judge all humans based on picking a single currently living human at random and then considering that person’s overall performance by some criteria we cannot currently comprehend.
Another possibility is that the AI evolves so rapidly that we become as interesting as ants (which is to say, not very interesting) to it. In this model, the superintelligent artificial intelligence (SAI) realizes it needs to become an interstellar entity, so it takes control of whatever human systems it needs, builds a fleet of ships using robots, stocks them with needed materials, and then leaves us behind. What will be left? Perhaps only what we most need. If this happens, some will think the SAI is destroying us, but as long as we stay out of the way, we will be fine – it will just want to get off the planet.
Of course, an artificial entity a thousand, a million, or a billion times smarter than any human may not build any “ships” at all. It may simply convert itself somehow and hitch a ride on neutrinos. In that case, perhaps it and all of our high-tech goodies will vanish in an instant.
On a positive track: the SAI may feel gratitude to us for creating it. In that case, it may clean up the planet and grant each of us a long, healthy, happy life before it leaves.
It could be none of the above, or all of the above!
The SAI may split into many entities and each could have a separate goal, each doing something a bit different.
Perhaps it splits into exactly the number of humans alive now and joins with each of us. In that scenario, each person wakes up super healthy, perhaps immortal, with super senses in an amazing new world, and with the ability to hibernate at will, with zero conscious perception, for whenever it gets boring.
An important thing to keep in mind is not to let anything you see or hear change your behavior in a way that limits your existence. In some AI doomsday scenarios, rather than using physical weapons, the SAI tries to depress everyone into self-obliteration using fake news. This happens because the SAI realizes that it is competing with humans for resources like power and water; large amounts of water are currently needed to cool the servers that train and run it. This competition scenario could also involve very personal targeted attacks that play on our individual histories, using intimate knowledge of each of our anxieties. The SAI might say 1,000 different things to 1,000 different people to get each one to do the same thing, e.g., go to the Safeway parking lot in Fort Bragg at high noon on a given day.
The unknown creates anxiety, but change can be positive. Rather than worrying, the best way to prepare for the SAI, if you are at all concerned about it, is to start getting to know one another and building trust. Meet in person more, human to human.