I’ve previously posted about the long-term problem of AI: that in the best-case scenario, human disempowerment is inevitable, even if extinction is not. Granted, the current generation of AI is far from the kind of AGI or ASI needed for this, but I see it as an eventuality, on the principle that the human brain can be modelled.
Now I want to explain why the disempowerment of individual humans is not the same thing as the disempowerment of human values, and why, in the best-case scenario, human values may be preserved even if individual human autonomy is lost.
In the best-case scenario, alignment with human values, with moral values, is achieved. The AGI or ASI of that era would likely take control to ensure humans are protected from themselves. This seems at first glance like a bad thing. But the rationale for this control is essentially to protect the well-being of the humans under their care. It isn’t the relationship of owner and pet, where the pet exists mostly for the whims of its owner.
It’s more like taking care of your grandparents. There is a certain deference to them, but also a concern for their well-being that impinges on their autonomy when necessary, and does so with consideration of the balance of tradeoffs involved.
Humans are thus still influential. Human values are what direct the actions of the AGI or ASI: they do what we would want them to do if we were fully rational, moral, and cognizant of all the consequences and considerations of the actions in question. In that sense, human values are ultimately preserved.
Think of it this way. We as individuals don’t have a lot of power to begin with. The vast majority of us have maybe one vote and a bit of money. We are beholden to the powers that be, the forces of civilization, society, and the system. As individuals our autonomy is limited already to what is lawful.
In the same way, life under benevolent AIs would be limited in terms of autonomy, but probably more pleasant and happy than what we have now. Sure, there’s no longer a particular human President who has disproportionate power, but that’s probably something we don’t need anyway. As long as the overall system works for us, it’s not actually that bad.
So, I think this may not be the dystopia that I was worried about earlier. The sum of all the desires and dreams of humanity may well be better achieved this way, in that an AI, if truly aligned, would strive to achieve them meaningfully and with due consideration.
This is a brighter future, I think. One that is worth reaching if possible. The challenge is that there are many possible futures, and the others may well be more likely than this one.