Autotel

Paradox of AI 4: Finding meaning in a tech utopia

by Ghostshrimp and Nick Jennings

[Original article]

If an artificial intelligence, or several of them, were to become the sustainer of human life, and also our best friend, how would we humans find meaning within such a context? We would need to give this new species the ability to provide humans with a meaningful existence as well.

Some dystopian visions of AI consist of the extinction or obsolescence of humans who confront machines (Asimov’s I, Robot, or the film The Matrix). I think that a tech dystopia-utopia instead concerns a conflict of meaning between the presence of machines and the place of humans: we may succeed in making machines take care of us, yet be unable to cope with becoming the eternal children of our own creation. Some three generations after a favourable intelligence explosion, the creation of AI will become dissociated from us. We will no longer be the creators of the AI, as AI starts re-creating itself; it will thus become a redesigned natural environment, one that actively takes care of us.

After the intelligence explosion, we will be unable to make our own decisions, but not because we have started living in a totalitarian state. Imagine you get involved in a romantic relationship with someone and marry him. Now imagine that this person can see the future and has omniscient knowledge of which decisions you should make in order to be the happiest: you become locked out of your own ability to decide, because it would be immoral and stupid to choose otherwise. Making decisions in the face of uncertainty can be overwhelming, but perfect certainty somehow seems to hinder liberty, and that has a deep impact on the perception of oneself, because one of the inherent characteristics of being human, liberty, starts to be out of place.

There is also an obvious lack of sense behind work in a tech utopia. In a sense, all work becomes either play, learning, or art. I think these three concepts merge in such a scenario, because the only remaining place for human decision-making or creative activity is an activity done by choice, purely for the intrinsic enjoyment of doing it. To play is no longer different from to work (we start playing very meticulous games such as Minecraft), and the result of creative work can only be art, because it cannot serve a direct practical purpose, only a self-expressive one. Although playing, working, and art-making merge, this single new entity explodes into many other new concepts, such as game-play, make-believe play, social play, toy-play, et cetera. In this sense, there will be a need to create a new universe that is safe from the presence of machines, so that humans can unveil their humanity in this virtual context. But the decisions remain on an expressive plane. If you deceived humans into believing that they are part of a normal, challenging world (as in The Matrix), the purpose of artificial intelligence would be defeated. Perhaps this new world could be challenging as in a game, but not mortal as our current world is. However it is done, there needs to be a deliberate effort towards designing the new natural ecosystem.

In terms of interpersonal relations, I cannot see why they would be fundamentally changed by a tech utopia. Humans relate in terms of love, by establishing many different kinds of love relations with other persons: friendship, romantic love, respect, admiration, hatred, et cetera. In loving a person, the subject is not replaceable; the love is oriented toward the identity of the person who is loved. The way we relate to one another would change if artificial intelligence could be merged with human biological brains and produce a disembodiment of identity. Humans might want to disembody their consciousness in order to live in a better, simulated world; but if this disembodiment of identity is also a destruction or diffusion of the person’s identity, it will feel like suicide, which is not humanly desirable. This is why interpersonal relations will remain the last vestige of what it means to be human, although they will happen with and among artificial beings as well.