Thirty years ago, when the Internet was in its infancy and even personal computers were rare, a technology writer embarked on a cutting-edge romp through the intellectual circles of his time. After interacting with some of the world's sharpest minds, he glimpsed the future and eventually wrote a landmark book, Out of Control (1994). More than a decade later, when the era of mobile Internet and smartphones arrived, people were surprised to find that the latest concepts (global information interconnection, distributed systems, digital money, cloud computing, and more) had all been anticipated in that book.
In 2010, the Chinese edition of Out of Control was launched, and its author, Kevin Kelly (widely known as KK), began a series of visits to China, gradually becoming known to Chinese readers.
At this point in time, artificial general intelligence (AGI) is budding and an era of change is about to begin, faintly echoing the prophecy of Out of Control: KK predicted back then that humans and machines would eventually merge, that "machines are becoming biological, and the biological is becoming engineered." In his own words, Out of Control is really only about how to build a "complex system"; robots, artificial intelligence (AI), and the like can all be regarded as complex systems, and he believes that idea is still relevant today.
Not long ago, we sat down for a conversation with KK and with Li Zhifei, founder and CEO of the large-model startup Mobvoi.
Li discussed the rise, impact, and threats of AI from the practical perspective of technology and business, while KK took the more abstract perspective of humanity, history, and even the universe. It was a rather "science fiction", imagination-stretching discussion in which we tried, once again, to "predict" the future of humans and AI.
Both agreed that today's AI has, for the first time in human history, taken on the form of life. Li Zhifei says that current AI is already equivalent in IQ to a human child and already possesses true general intelligence capabilities such as knowledge, logic, reasoning, and planning. KK believes that as "silicon-based life", AI will be as adaptable as humans, capable of self-learning and growth.
Both are technological optimists. KK believes AI will not put humans out of work; it will only make them more efficient and free them from jobs they find "abominable". AI may appear to widen the gap between rich and poor, but like every technological change in human history, it is more likely to grow the pie and better serve justice. As for its impact on business, it will empower individuals, small and medium-sized companies, and large corporations all at once.
As the title Out of Control suggests, Kevin Kelly feels that to better tap the power of AI, humans should allow parts of it to "get out of control". After all, ChatGPT itself "emerged" from a state that humans did not fully control. Both he and Li Zhifei believe that AI's intelligence is still nascent, and that humans should keep letting go rather than control (regulate) it prematurely and excessively; the latter could strangle innovation in its cradle.
Taking a longer view, if AI does become a super lifeform one day, how exactly should humans get along with it?
Science fiction and movies depict a gloomy future that often ends with AI killing humans after it awakens to self-awareness. But Kevin Kelly argues this reflects a lack of imagination: failure is always easier to imagine, and makes for a more seductive story, than success. Looking ahead, the benefits and value of AI are not hard to imagine either.
His suggestion is to think of AI as "artificial aliens": humans will be able to use AI's intelligence to solve their own problems and achieve a degree of "control". Li believes that, as the "ancestor" of AI, humans will eventually integrate with it rather than control it.
Whether this prediction comes true will depend on the world 5,000 days from now.
The following is the full text of the conversation:
01 Even if OpenAI hadn't made ChatGPT, someone else soon would have
Peng Zhang: In China and the US alike, everyone is talking about large models these days. How do you feel about this wave of technological change? Were you shocked by it?
Kevin Kelly: Artificial intelligence (AI) has been around for decades. There was a major leap when AI models started using neural networks and deep learning and got bigger and bigger. About four or five years ago, the field began producing very large transformer models (deep learning models that use a self-attention mechanism), and in the last two years these have taken the shape of large language models.
In fact, the recent major change in AI is not in its capabilities, which are not dramatically better. The real change is that AI now gives us a conversational user interface, letting us actually communicate with it in (natural) language. Before, people had to learn a great deal of programming and become very proficient to use it; now, all of a sudden, everyone can. That is the really exciting part.
Zhifei Li: I agree with KK that the main change in the current big language model is the improvement in natural language interaction. This is really something that many ordinary people can feel, and that's why ChatGPT is having such a big social impact today.
But I think today's large models have also changed a great deal in capability, and it is exactly this change that makes natural interaction possible: an AI must first break through on these basic capabilities before natural language interaction can be realized.
And I think the abilities of future large models will go beyond natural language interaction. Things like writing programs, automating enterprise processes, and robot automation are not conversational, but they will become possible.
Peng Zhang: I strongly agree that AI's ability to converse better with humans will set off a paradigm revolution in technology and business. If ChatGPT is the second curve of AI development, was its emergence inevitable or accidental?
Kevin Kelly: ChatGPT's capabilities have exceeded everyone's expectations. I don't think anyone saw this coming, including people in the AI field. In fact, most researchers didn't know how ChatGPT worked, and they tried to work on improving it, but it was difficult because it wasn't known how it worked. Therefore, the emergence of ChatGPT was an accident.
Although ChatGPT was very surprising, we also saw its limitations. Its main limitation is that the model is trained on the average level of human-created content, so it tends to produce average results. And we often want something that is not average, but above average. This is hard to do.
Second, the optimization goal of these models is plausibility, not accuracy. Therefore, how to make them more accurate is a major challenge that everyone is currently grappling with.
Zhifei Li: I think the emergence of ChatGPT was accidental in the short term but inevitable in the long term. It was a coincidence that OpenAI shipped ChatGPT at the end of last year, but even if it hadn't, other companies would soon have made it.
This has been repeated countless times in the history of technology. Take deep learning: AlexNet was the first to crack large-scale image classification in 2012, but many teams with strong conviction and engineering ability were working on it at the time, so if AlexNet hadn't done it, others would have. In 2017, Google introduced the transformer, which solved the poor scalability of sequential models like RNNs and LSTMs; if Google hadn't done it, other teams would have.
The backdrop to ChatGPT's birth is that the transformer had matured and we had strong enough computing power to train on the massive amounts of data on the Internet; all the necessary ingredients were already in place. OpenAI simply put them together best at that point in time.
Peng Zhang: Talking about inevitability and serendipity reminds me of a word that has been very hot recently: "emergence". It appears at least 88 times in KK's book Out of Control. How should we understand the word today?
Kevin Kelly: In English, "emergence" refers to the behavior of a system, a whole collection of interconnected things like the Internet, robots, bodies, ecosystems, or even the entire world, where the behavior of the whole is not contained in the behavior of any individual part. For example, a hive can remember things beyond the lifespan of a single bee, so the hive itself has behaviors, abilities, and powers that its individual parts do not. We call this "emergence".
Similarly, much of what AI does is "emergent". There is no single place in the model that explains where a capability comes from; it takes all the parts working together to produce it. Just as there is no single place in our brain where "thought" resides, thought "emerges" from the neurons as a whole. Things like thinking, evolution, and learning can all "emerge" from a system.
Zhifei Li: My understanding of "emergence" comes from the book Complexity, which talks about "more is different". It is like the old Chinese saying that quantitative change leads to qualitative change. "Emergence" was first applied to large models in a paper published by Stanford and Google researchers at the end of last year. They found that as you scale up a large model, once it crosses a certain threshold it suddenly "emerges" with certain abilities.
I now feel the word "emergence" has been overused. We call it "emergence" precisely because we cannot explain where the abilities of large models come from; the word is neither explanatory nor actionable, and it does not help us train or apply large models. These days we study not "emergence" so much as the quantitative relationship between parameter count and final performance, which may be more helpful for understanding and controlling large models.
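The quantitative relationship Li describes is often summarized as a scaling law: loss falls roughly as a power law in parameter count. The sketch below is purely illustrative (the numbers are invented, not real measurements, and the conversation does not specify any formula): it fits such a law by least squares in log-log space and extrapolates to a larger model.

```python
import math

# Hypothetical (parameter count, loss) pairs, invented for illustration;
# scaling-law studies fit curves like loss = a * n^(-b) to many training runs.
observations = [(1e7, 4.2), (1e8, 3.1), (1e9, 2.3), (1e10, 1.7)]

def fit_power_law(points):
    """Least-squares fit of log(loss) = log(a) - b * log(n)."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(loss) for _, loss in points]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # (a, b) with loss ≈ a * n^(-b)

a, b = fit_power_law(observations)
predicted = a * 1e11 ** (-b)  # extrapolated loss for a 100B-parameter model
print(f"b = {b:.3f}, predicted loss at 1e11 params = {predicted:.2f}")
```

A fit like this is "explainable and manipulable" in the sense Li means: it turns an unexplained jump into a smooth, predictable curve you can budget against.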
Peng Zhang: Can we understand "emergence" as something that leads to "loss of control"?
Kevin Kelly: That is not quite the right way to see it. Of course there is a "loss of control" component: if you want to harness the power of "emergent" behavior, you may need to tolerate some things being outside your control. Right now we may not understand or control AI very well, but some of that is actually necessary to get the best results.
But at the same time, we can't let everything get 'out of control'; we have to exercise some degree of 'control', that is, to guide and manage AI. Again, we don't want to be overly restrictive, but we must achieve a certain level of control. However, we will likely never be able to fully control them, especially with more powerful AI, and we will likely never fully understand how they work. That's the trade-off involved.
Peng Zhang: It's been many years since you wrote Out of Control. In light of this wave of the AI revolution, do you think any parts of the book are worth recalibrating at this point in time?
Kevin Kelly: I don't think I talk much about AI or loss of control in Out of Control; it's really mostly about how simple things become complex things. There's something called Rodney Brooks' subsumption architecture, which describes how you can build a complex robot by embedding layers of intelligence into it. The path to complexity is to overlay new things on top of things that are already working properly. (Brooks' subsumption architecture proposes that higher-level behaviors must accommodate, and are layered on top of, lower-level ones.)
Take an insect: it can still walk even if you cut off its head, because walking is handled locally. Likewise, our brain has a core responsible for breathing and other autonomic functions, and we add further layers of complexity on top of it. This idea is still valid today as people build robots and AI and try to make them more complex. That is really the only thing I talk about in Out of Control, and I think it still holds true.
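The layering KK describes can be sketched in a few lines of code. This is only a toy illustration (the layer names and actions are invented, not taken from Brooks' work): each behavior runs on its own, a higher layer overrides a lower one only when it has an opinion, and the lowest layer keeps working even if everything above it is removed, much like the headless insect that can still walk.

```python
# A minimal sketch of subsumption-style layering: behaviors are stacked,
# and the first (highest-priority) layer with an opinion wins.

def walk(state):
    """Lowest layer: keep walking no matter what."""
    return "step forward"

def avoid(state):
    """Middle layer: override walking when an obstacle is sensed."""
    if state.get("obstacle"):
        return "turn away"
    return None  # no opinion: defer to lower layers

def seek_food(state):
    """Highest layer: head toward food when it is sensed."""
    if state.get("food_direction"):
        return f"move {state['food_direction']}"
    return None

layers = [seek_food, avoid, walk]  # highest-priority layer first

def act(state):
    for layer in layers:
        action = layer(state)
        if action is not None:
            return action

print(act({}))                          # step forward
print(act({"obstacle": True}))          # turn away
print(act({"food_direction": "left"}))  # move left
```

Note that deleting the top two layers leaves `walk` fully functional, which is the point of building complexity by overlaying new behavior on things that already work.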
Peng Zhang: I remember you had an interesting perspective earlier: "assume technology is a form of life." When it emerges with near-human intelligence, what will it want next? What impact will that have on business and human society?
Kevin Kelly: Technology is what I call the "seventh kingdom of life." We have accelerated the evolution of life into a "dry" realm where it no longer needs a "wet" environment but can exist in silicon. We can use our minds to make other life-like technologies. They are adaptive and can learn and grow.
My point is that technologies will basically pursue the same things life does. For example, they will evolve toward greater diversity while also becoming more specialized and specific. Our bodies have 52 different kinds of cells, including heart cells, bone cells, and skeletal muscle cells. We will likewise make specialized AIs that perform specific tasks such as language translation, image generation, and autonomous driving. It is also clear that technology will become more complex, just like life.
Finally, technology will be as "mutually beneficial and symbiotic" as life. As life evolves toward complexity, some of it comes into contact only with other life, never with non-living matter; take the bacteria in your gut, which are surrounded only by other living cells. In the future there will likewise be AI that is designed not for humans but specifically for other AI: AI dedicated to maintaining other AI, AI that communicates only with other AI.
Zhifei Li: I would like to explain the relationship between AI and life from an engineer's perspective. A few years ago, people kept asking me, "What is the IQ of AlphaGo (the first computer program to beat a Go world champion)?" I disliked the question then because I could not answer it. The AI of that time could play Go at a very high level, but it could not hold a natural language conversation the way even a 3-year-old can; its mechanism was fundamentally different from a human's.
But these days I especially like to compare AI to a child. The core reason is that today's AI already has the real general intelligence capabilities a child has: knowledge, logic, reasoning, planning, and so on. So I would say today's AI is more like a living being. In IQ it is like a 5-year-old, while in knowledge it may be like a college professor or a newborn baby, depending on what data it has seen.