Personal blog. All views are mine only.
Illustration: hakule/Getty Images
This blog summarises key concepts proposed by David Deutsch in the chapter, Artificial Creativity, in The Beginning of Infinity.
Turing understood that AI must in principle be possible, because a universal computer is a universal simulator. He famously argued, on the grounds of universality, that a machine can think. He proposed a test for whether a program had achieved this, now known as the Turing Test.
The Turing Test
The test is simply that a suitable (human) judge be unable to tell whether the program is human or not. Turing suggested that both the program and a genuine human should separately interact with the judge via some purely textual medium such as a teleprinter, so that only the thinking abilities of the candidates would be tested, not their appearance. In 1964, to “pass the test”, Joseph Weizenbaum wrote a program called Eliza, designed to imitate a psychotherapist. A psychotherapist was a shrewd choice because psychotherapists, as professionals, give opaque answers and reveal little about themselves, and are therefore easy to imitate. The program’s questions are built from the user’s own questions and statements, which makes it remarkably simple. A typical Eliza-like program has two basic strategies. First, it scans the input for certain keywords and grammatical forms. Second, it might build up a database of previous conversations, enabling it simply to repeat phrases that other users have typed in, again chosen according to keywords found in the current user’s input. Weizenbaum was shocked that many people were fooled by it; I suppose that is proof of the intense human desire for company and social interaction. Moreover, even after people had been told that it was not a genuine AI, they would sometimes continue to have long conversations with it about their personal problems, exactly as though they believed that it understood them.
The probability that the outputs of such templates will resemble the products of human thought diminishes exponentially with the number of utterances. Programs written today, decades later, are still no better at the task of seeming to think than Eliza was; today’s versions are chatbots. Anyone who has engaged with a chatbot will probably agree that they are not very useful, because they essentially function as canned responses to frequently asked questions.
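The two strategies described above can be sketched in a few lines. This is my own minimal illustration of the keyword-and-template idea, not Weizenbaum’s actual code; the rules and wording are invented for the example:

```python
import re

# Minimal Eliza-style sketch (illustrative, not Weizenbaum's implementation).
# Strategy 1: scan the input for keywords/grammatical forms and reflect the
# user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your", etc.)
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to an opaque, content-free prompt,
    # exactly the kind of answer a psychotherapist can get away with.
    return "Please go on."

print(respond("I feel lonely"))  # -> "Why do you feel lonely?"
print(respond("hello"))          # -> "Please go on."
```

Note that no knowledge is created at runtime: every response is a rearrangement of the user’s input through templates the programmer wrote in advance, which is precisely why such outputs stop resembling thought after only a few exchanges.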
Flaws of the Turing Test
The Turing Test has two main flaws, as a test for determining artificial intelligence:
Requiring the program to pretend to be human is both biased and irrelevant to whether it can think.
The human participant in the test might intentionally imitate a chatbot – thus spoiling the test.
Turing’s test was valuable for explaining the significance of universality and for criticising the ancient, anthropocentric assumptions that would have ruled out the possibility of artificial intelligence. But it was rooted in the empiricist mistake of seeking a purely behavioural criterion: it requires the judge to come to a conclusion without any explanation of how the candidate AI is supposed to work. In reality, judging whether something is a genuine AI will always depend on explanations of how it works, and in particular on who created the knowledge in its utterances. If it was the designer, then the program is not an AI; if it was the program itself, then it is an AI.
The importance of explanations
A good test needs a good explanation. Indeed, a good explanation would suffice even in the absence of any specific output. Why? Because it explains how the behaviour could happen, and why we should expect it to happen, given how the program works.
A good explanatory test of whether a program is thinking is to have a conversation with it over a diverse range of topics, and to pay attention to how it handles the various purposes that come up. An AI’s abilities must have some sort of universality: special-purpose thinking would not count as thinking. Let us assume that AI, universal explainers and universal constructors (a constellation that includes consciousness) stand at the same level, since they arrived together in humans.
We should therefore expect AI to be achieved in a jump to universality. By contrast, the ability to imitate humans is not a form of universality, which invalidates the Turing Test. Simply put: becoming better at pretending to think is not the same as coming closer to being able to think.
Artificial evolution
What is artificial evolution? Consider the two ways a problem can be solved:
Creativity – human thought is used to solve the problem.
Running a program – feedback from repeated trials is used to modify the program.
In the second form, you may simply elect to delegate the trial and error to a computer, using a so-called evolutionary algorithm: the computer runs many trials, each a slight variation of the initial program, keeping whichever variant performs best. But even in this case it is not really evolution, in that no real creation of new knowledge has occurred. All the variants are forks of the initial program, and we have an obvious explanation of its abilities: namely, the creativity of the programmer.
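The delegation described above can be made concrete with a toy sketch. This is my own illustration, not an example from the book; the target, mutation rate and fitness function are all invented, and the point to notice is that the fitness function encodes the programmer’s knowledge of what counts as success:

```python
import random

# Toy 'evolutionary algorithm': mutate a bit-string and keep improvements.
# All the knowledge of what counts as 'fit' lives in fitness(), which the
# programmer wrote - the program itself creates no new knowledge.
random.seed(0)
TARGET = [1] * 20  # the goal state, chosen in advance by the programmer

def fitness(candidate):
    # Count bits matching the target: this criterion is the programmer's.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Each trial is a slight random variation (a 'fork') of the current one.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(generations=200):
    best = [0] * 20
    for _ in range(generations):
        variant = mutate(best)
        if fitness(variant) >= fitness(best):  # selection step
            best = variant
    return best

result = evolve()
print(fitness(result))  # climbs toward 20, then further runs add nothing
```

Once the target is reached, letting the program run further produces no further improvement, which mirrors the observation below about real ‘evolutionary’ projects stalling after achieving their intended aim.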
The Turing-test idea makes us think that, if it is given enough standard templates, an Eliza-like program will automatically be creating knowledge; artificial evolution makes us think that if we have variation and selection, then evolution (of adaptations) will automatically happen. But neither is necessarily so. In both cases, another possibility is that no knowledge at all will be created during the running of the program, only during its development by the programmer. One thing that always seems to happen with such projects is that after they have achieved their intended aim, if the ‘evolutionary’ program is allowed to run further, it produces no further improvements. The same result can be seen in attempts to evolve simulated organisms in a virtual environment, including the kind that pits virtual species against each other.
In both the case of AI and that of artificial evolution, we do not know how these phenomena, and their universality, were achieved in nature. We do not know why the DNA code, which evolved to describe bacteria, has enough reach to describe dinosaurs and humans. And though it seems obvious that a genuine AI will have qualia and consciousness, we cannot explain those things, and what we cannot explain we cannot yet program. Once we understand them, artificially implementing evolution and intelligence, with their constellation of associated attributes, will be no great effort.
In sum:
The field of AGI has made no progress because there is an unsolved philosophical problem at its heart. We do not understand how creativity works.
Once we know how creativity works, programming it will not be difficult.
Even artificial evolution may not have been achieved yet, despite appearances. The problem there is that we do not understand the nature of the universality of the DNA replication system (see my earlier blog on universality).