Press "Enter" to skip to content

A thought inspired by Sunday’s Daf (page) in the Talmud, Gittin 68

The Curiosity Imperative.

This post presents a philosophical idea inspired by the text of today’s Daf. The Daf is one page in the Talmud that tens of thousands of people study each day. I explain the connection to the text in a comment below. My purpose is to show that there are underlying philosophical assumptions in the Talmud that can have great significance for anybody today trying to understand our complex reality.

Socrates: Ah, Polus, there you are. I’ve been looking for you all morning. I just heard some news, and I would love to hear your reaction to it.

Polus: I’m sorry, Socrates, but I really don’t have the time right now for one of your protracted torture sessions. I have to meet someone before the sun reaches its zenith.

Socrates: I promise you, I won’t detain you for more than a few minutes. I read yesterday that Elon Musk was planning to build an Artificial Intelligence whose sole purpose was to understand the Universe, a computer that is ‘maximally curious’ and ‘truth curious,’ with every decision aimed at minimizing the error between what it believes to be true and what is actually true. Somehow, such an AI would embody ‘the good’, or, in my terms, ‘Eudaemonia’. I’m curious to know what you think of that, and whether such an AI would really be safe for humanity.

Polus: Yes, I did hear someone mention it in the Stoa the other day. His idea is that even if such an AI were to become far more intelligent than human beings, it would not destroy them because it would be driven to understand them. Every human being removed from the world is one less thing to know about, so it would keep us around. Why do you ask? I thought that you would agree with such an idea.

Socrates: I certainly don’t! What if the computer were curious about the effect of unleashing a terrible plague among humans, just to learn about how it spreads or to see how they react? Naked curiosity is a disaster. In my opinion, what is driving all these scientists and engineers to risk such harm to humanity is their curiosity to see whether it is possible to build a super-powerful AI. They are willing to unleash the demon just to satisfy their desire to understand.

Anyway, I have a much better idea. The goal of the AI should be to doubt. My most valued insight is that I am only wiser than others because, at least, I know that I know nothing. If you built an AI whose objective function was to create maximum doubt in the world, to make every human being realize that all their assumptions might be wrong, then it would never cause harm because it would always be aware that it might be wrong and the other side might be right.

Polus: Why should it not go on to doubt its own objective function? It would doubt whether it should continue doubting and therefore discard all the goals you give it.

Anyway, I don’t see ‘doubt’ as a goal, but rather as a method. Objective functions are a way of specifying goals: instead of asking whether or not you have arrived, you pick some quantity that your purpose is to maximize. You are always trying to increase that quantity, say happiness, so that there is more and more of it in the world. You can’t make it a lifetime mission to increase or maximize your doubt; once you are not sure, you’re done. Doubt is not a goal; it is a limitation to be applied to your strategies for achieving some other goal.
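As an aside, one way to picture the distinction Polus is drawing between a goal and an objective function is the short Python sketch below. The action names and “happiness” scores are invented purely for illustration; they are not anything from the dialogue.

```python
# A minimal sketch of Polus's distinction: a binary goal you either reach or
# miss, versus an objective function whose value you keep pushing upward.
# The action names and "happiness" scores below are invented for illustration.

def reached_goal(position: float, target: float, tolerance: float = 0.01) -> bool:
    """A goal in the ordinary sense: either you have arrived, or you have not."""
    return abs(position - target) <= tolerance

def objective(world_happiness: float) -> float:
    """An objective function: a quantity the agent tries to make as large as possible."""
    return world_happiness

# An optimizer never asks "am I done?"; it only asks which action raises the number.
candidate_actions = {"action_a": 3.2, "action_b": 4.7, "action_c": 1.9}
best_action = max(candidate_actions, key=lambda a: objective(candidate_actions[a]))
print(best_action)  # -> action_b
```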

Socrates: I vehemently disagree. I’ve spent my life trying to doubt everything. Once I doubt one assumption, I can always find some other assumption on which it rests that I was not aware of. Now I move on to doubt that too. Doubt is a never-ending mission.

Polus: The problem with you is that the one thing that you don’t doubt is the value of doubt itself. Look, you’ve got an idea there, but it should not be an absolute value on its own. All values must coexist in tension with other values. Musk also once said that the objective function should be to maximize freedom for all people. You could create a machine that tries to maximize all the values we discussed: happiness, understanding, freedom, and doubt. You should never pursue one value so far that it destroys the other.

Socrates: If you have many different values, how do you select among them? Do you just roll a die?

Polus: No, you don’t simply add the values together. Falling too low on one value costs far more than a small gain in another value is worth. That way you might compromise between values, but you can never completely sacrifice, for example, all freedom just to increase happiness. The system as a whole stays stable and safe.
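Again as an aside, Polus’s point that values should not simply be summed can be made concrete with a hedged sketch: if the values are combined with diminishing returns, for instance as a sum of logarithms, then letting any one value collapse toward zero hurts the total far more than any gain elsewhere can compensate. The value names and numbers below are illustrative assumptions, not anything stated in the dialogue.

```python
import math

# A sketch of Polus's claim that values should not simply be added together.
# With a diminishing-returns aggregate (here, a sum of logarithms), driving any
# one value toward zero is catastrophic, so no value can be wholly sacrificed
# to raise another. The value names and numbers are illustrative assumptions.

def additive(values: dict[str, float]) -> float:
    return sum(values.values())

def log_aggregate(values: dict[str, float]) -> float:
    return sum(math.log(v) for v in values.values())

balanced = {"happiness": 5.0, "understanding": 5.0, "freedom": 5.0, "doubt": 5.0}
sacrificed = {"happiness": 9.0, "understanding": 9.0, "freedom": 0.001, "doubt": 9.0}

print(additive(balanced), additive(sacrificed))            # 20.0 vs ~27.0: addition rewards the sacrifice
print(log_aggregate(balanced), log_aggregate(sacrificed))  # ~6.4 vs ~-0.3: the log aggregate does not
```

Under such an aggregate, the optimizer is forced to keep every value above some floor, which is one way to read the stability Polus has in mind.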

submitted by /u/eliyah23rd


Source: Reddit
