Autotel

Paradox of AI 2: Artificial slaves

Black Mirror. Picture source: https://en.wikipedia.org/wiki/White_Christmas_(Black_Mirror)

[Original post]

When it comes to the automation of our daily lives, there are many dreams in which humans no longer need to work because machines have automated all of it, leaving us free to care only for our loved ones and to enjoy life's pleasures. I think it is not hard to find people who see this reality as a dystopia rather than a utopia. The fact that we keep advancing towards building the brain-android reinforces the sense that we are no longer in control of our own progress as humans. I think that today it is easy to be frightened of being replaced in our capacity to work, because we build our identity around the work we do. It is very important for us to feel that we have the potential to make a difference in the lives of other humans. If we see that capacity surpassed as a matter of [scale], we could have a hard time coping with our own existence.

Bostrom stated that the problem with AI is that it is a massive optimization machine, and thus an unfortunate yet entirely plausible command could lead us to catastrophic results. [here] He follows a line of thought similar to Asimov's, in which machines have a very rigid way of working, probably because Asimov expected artificial intelligence to emerge from our current binary computers, so that commands or programs become very straight vectors. Machine solutions become extrapolations of our ideas that could later victimize us. The development of neural networks has led me to think that maybe brain-androids will not be extreme optimization machines, but rather extreme genius human psychopaths.

I think that at a certain point it will be possible to assemble a system that can design solutions for us upon request: things like feeding it all the available information about cancer and expecting the machine to come up with a solution that eliminates the cancerous cells without compromising the human life that hosts them. But maybe these questions are a bit more complex, again in Asimov's terms. How will the machine understand what it means to preserve the life of the host? Wouldn't the machine, for instance, consider the cancerous cells part of that life? What if the process involves deconstructing the person entirely and then re-creating them? Does that still count as a solution?

Machines obviously need to learn far more than we can provide as part of our questions or problems. Machines need to form part of society in order to understand what we mean by a human life. This is because we ourselves understand these questions only intuitively, and so we can pass this information to a machine only in a less controlled way. But then, if machines need to form part of society and be able to empathize with humans and so on, wouldn't we just be creating hand-made human slaves? Take into account that humans will never be a match for their machine peers. What kind of relation will then be established between humans and machine citizens? I can imagine that it will soon become the relation between responsible adults and children. Biological humans will become eternal children in the eyes of the robots, and it will become meaningless to try to change the course of the future, as we once could. If things go very well, we will simply be kept. Perhaps we will be able to request things from our parents, the robots, and they will grant them if they deem it convenient.