Large AIGC models have unleashed the market's imagination about avatars.
Put a "shell" on ChatGPT and you would have a very good virtual human. Taking the idea further, I could gather all my historical data, train a virtual doppelganger of myself, and let this "me" serve as my secretary. With distributed, customized AIGC, everyone in the world could have a virtual doppelganger, delegating unimportant work and social interactions to be completed automatically in virtual space.
Envision things further down the line and they get wild. A ready-made concern comes from the movie "The Wandering Earth": humans leave too many decisions to AI, only for the AI to calculate that eliminating humanity is the optimal solution for the world.
MOSS calculates that the optimal way to preserve human civilization is to destroy humanity. | Image source: "The Wandering Earth 2" stills
We are not ethically prepared for a world filled with avatars.
The future direction of avatar governance and compliance is indeed undecided, and officials have recently begun to focus on the application layer of AI. One important event is the vote on the AI Act planned in the European Parliament at the end of this month. Another is the standard-development project "Technical Requirements for Trusted Virtual Human Generated Content Management Systems", launched the day before by the China Academy of Information and Communications Technology (CAICT), which is also preparing a White Paper on Trusted Virtual Humans; both target virtual-human AIGC directly.
What kind of social responsibility logic should the virtual human follow if it gains a more powerful intelligent core?
The essence of governing virtual humans is still governing real people
A game-industry practitioner told Huxiu: "There are many definitions and application scenarios for virtual humans, but in the end they can be divided into two kinds: those with a soul and those without."
At the moment, the "soul" of an avatar is very expensive. It requires technical solutions on one hand and continuous investment in content on the other. In the consumer market, the "soul", or "personality", is the core competitiveness of a virtual-human IP: giving a virtual human good-looking skin is not hard, but it takes an interesting soul to keep fans paying attention. That usually requires an entire content planning and production team behind the scenes, plus qualified "people in the suit", the performers behind the character. Some virtual idols also rely on user co-creation and user-generated content to enrich the personality. In addition, virtual humans need considerable technical investment to appear agile in expression, voice, movement, and interaction. Under these cost constraints, even the big companies' virtual humans fall short on some of the above.
The question of what kind of "person" an avatar becomes is an operational one, not a technical one, for now and for the foreseeable future. A virtual person's personality is given by its operator, its users, and the performers behind it; technology is only the means. So governing virtual people is still mainly a matter of governing the real people behind them.
The domestic Internet ecosystem has already begun to handle the primitive version of this type of problem. For example, in 2020 the well-known host He Gui took a technology company to court because the company's app let users build custom chatbots, and some users "trained" a chatbot using He Gui's name and likeness. The Beijing Internet Court found that the defendant's product not only violated He Gui's portrait rights but also posed a potential negative impact on the plaintiff's personal freedom and dignity, and ruled that the defendant must apologize and pay damages.
Chatbots, and more complex AI applications, involve legal issues that can be resolved with existing logic. Regulators and enforcement authorities only need to extend the "red lines" of traditional media and Internet governance a little further to cover AIGC and avatars as well.
On January 10, 2023, the "Provisions on the Administration of Deep Synthesis of Internet Information Services" (hereinafter the "Regulations"), jointly issued by the Cyberspace Administration, the Ministry of Industry and Information Technology, and the Ministry of Public Security, came into effect. The Regulations' key term, "deep synthesis", refers to technology that uses deep learning or virtual reality methods to generate digital works, covering AI-generated text, images, and sound, as well as synthesized voices, face replacement, simulated spaces, and other digital products. These are mainly the AI applications that broke into the mainstream over the past two years, and the definition is broad enough to bring AIGC within the scope of compliance.
The specifics in the Regulations are largely a necessary extension of previous regulatory requirements. For example, information services have traditionally been required to ensure that their products do not infringe on the privacy, portrait rights, personality rights, or intellectual property rights of others; under the Regulations, these principles now also apply to deep synthesis products and services. Similarly, the Regulations require content platforms to fulfill their audit obligations, ensuring that deep synthesis works published on their platforms comply with laws and regulations and do not endanger national security or social stability.
From the new regulations and existing jurisprudence, we can see that although there are currently controversies over copyright, personality rights, and content security for virtual humans, in most cases these are not fundamentally new disputes: where the red line lies is nothing new. All the industry needs is a clear set of compliance principles and liability rules, and then each player can get on with the job in its respective role.
A spokesperson for SenseTime, a leader in AI digital-human technology, told Huxiu: "The Regulations set out a division of responsibility that fits current business logic, giving 'technology supporters', 'content providers', and other industry players clearer compliance expectations; this also helps regulate the market and boost consumer confidence."
Don't be carried away by sci-fi
There's a market for worrying about the ethical risks of avatars. But on such unsettled questions it is not appropriate for government regulation to lead the way; the exploration of virtual humans' social responsibility still needs to happen more spontaneously, through companies and the market. There is a lot of room for corporate social responsibility here.
A fresh counterexample is the EU's Artificial Intelligence Act. Drafted in 2021, it is due to be voted on in the European Parliament at the end of March (a few days after this article goes to press).
But whether the vote passes may hardly matter, because ChatGPT has already subverted part of the AI Act's original premise. For example, the Act proposes to ban certain AI applications that run contrary to human rights, such as some facial recognition uses. But it is difficult to assess the riskiness of a general-purpose generative AI like ChatGPT under that original legislative framework. In the spirit of the Act, the EU could be tempted to classify GPT as "high-risk AI", but that would be tantamount to stagnation.
In fact, even before ChatGPT appeared, some MEPs had objected that the AI Act regulated AI applications in too much detail and would hinder the EU's technological innovation.
An AI ethics researcher told Huxiu that EU legislation has in fact been substantially hindering the development of AI technology ever since the General Data Protection Regulation (GDPR) came into force in 2018; many technology companies have struggled to innovate under the restrictions of this data protection law.
The AI Act is another example of the EU legislature's style: intervene quickly, protect human rights, and maintain deep distrust of technology. Hasty legislative regulation at a stage when AI technology is still evolving rapidly has led the AI Act into its current awkward position. This suggests that, at this stage, governance of AI ethics still needs to be led by business.
Yet even though it is not appropriate for the legislative and executive branches to intervene too heavily in AI governance, on one point the law's conservative stance is consistent.
Specifically on avatars, the law should not treat AI or avatars as persons for the foreseeable future. In other words, even as AI improves, the law will not regard it as having independent, autonomous motives; the various risks involving AI and virtual humans will ultimately be traced back to the relevant individuals or organizations.
This may seem very unromantic. The AI creations in science fiction, and even the virtual idols and virtual influencers in the real market, are all crafted as if they had independent thoughts and personalities and could make autonomous decisions. But under the mainstream attitude of current jurisprudence, the law will not treat a virtual person as a subject of liability. All the mistakes an avatar makes must be borne by the technology providers, content providers, operators, and other entities behind it. We should not expect virtual persons to bear liability, or to hold economic and political rights, anytime soon. A virtual person is a work, just as the eggs you fried this morning are a work. A work cannot go to jail; if something goes wrong with your work, the responsibility is yours.
At this point, we can reduce the romantic question of "how to give a qualified soul to a virtual person" to a question of corporate governance and social responsibility.
Responsible AI, a corporate governance topic
After several years of discussion and accumulation, humanity's general ethical requirements for AI are usually summarized as "responsible AI" or "trustworthy AI". Both expressions can also be applied to AI's various sub-products, as in the CAICT standard under development, which concerns "trusted virtual human content generation".
The general requirement of being "responsible" or "trustworthy" can be broken down into several specific principles. Different technology companies break it down differently, but the results are broadly similar. Industry pioneer Microsoft, for example, summarizes responsible AI into six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Take MOSS in "The Wandering Earth": through calculations beyond human comprehension, it concludes that "the best option to preserve human civilization is to destroy humanity." Judged by trustworthy-AI principles, MOSS is first of all opaque, a computational black box, and it is hard to see where its reasoning went wrong to reach such an outrageous conclusion. It is also unsafe, unreliable, and uncontrollable. Finally, it is unaccountable: MOSS causes all kinds of problems for human society in the film, yet there is nothing anyone can do about it.
The AI in the movie is very effective at creating dramatic conflict, but in real AI ethics work, rather than hypothesizing such a counterintuitive AI and then feeling bewildered, it is better to eliminate the seeds of loss of control from the beginning.
The larger technology providers in the industry chain usually have dedicated AI ethics teams, which develop technical solutions promoting fairness, security, transparency, and the other trustworthy-AI characteristics, and open-source those solutions. IBM, for example, has open-sourced a set of fairness algorithms in its AI Fairness 360 toolkit.
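To make the idea of a "fairness algorithm" concrete, here is a toy sketch of one metric that open-source toolkits of this kind (IBM's AI Fairness 360 among them) implement: the disparate-impact ratio, the rate of favorable outcomes for an unprivileged group divided by the rate for the privileged group. The function below is an illustration written from scratch, not the toolkit's actual API.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: parallel list of 0/1 decisions; groups: group label per decision.

    Returns the favorable-outcome rate of the unprivileged group divided
    by that of the privileged group. A ratio near 1.0 suggests parity; a
    common rule of thumb flags ratios below 0.8 as potentially biased.
    """
    def favorable_rate(group):
        picked = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(picked) / len(picked)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Group "a" receives favorable outcomes 3 times out of 4; group "b" only once.
ratio = disparate_impact(
    outcomes=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    unprivileged="b",
    privileged="a",
)
print(round(ratio, 2))  # 0.25 / 0.75 -> 0.33, well below the 0.8 threshold
```

Real toolkits bundle dozens of such metrics, plus mitigation algorithms that reweight or transform training data; the point here is only that "fairness" can be operationalized as something measurable.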

But technology is far from the whole story. As the Trustworthy AI White Paper from CAICT and JD Explore Academy puts it, there is no perfect technology; the key is how it is used.
Trustworthy AI is also a set of working practices for companies, indeed a corporate culture. Microsoft's Office of Responsible AI site explains that the team designs and develops AI products around the six responsible-AI principles above, analyzes possible problems and omissions, and continuously monitors performance in operation. Microsoft also recommends that customers operating AI do the same.
Internal brainstorming alone isn't enough. As OpenAI CTO Mira Murati said in an interview, because GPT is a very general tool, it is difficult for developers to know all of its potential impacts and flaws in advance. That is why the company opened some of GPT's capabilities to the public, to surface potential problems with the technology.
What governance issues are particular to trusted avatars?
Do avatars that incorporate AIGC raise any special governance issues? Some social-responsibility thinking already exists in the industry, though it is less systematized and commands less consensus. On this, SenseTime, as one of the main drafters of the Technical Requirements for Trusted Virtual Human Generated Content Management Systems, shared with Huxiu some initial thoughts from its participation in the project.
Regarding what is special about trusted avatars, SenseTime noted that when AIGC technology is used for avatar generation, especially when the avatar has ChatGPT-like interactive intelligence, it can achieve "hyper-realism". A virtual human that is hard to distinguish from a real one may amplify the related risks and aggravate potential harm to the people concerned. Here, beyond the current AIGC management methods, responsible companies can deploy a series of technical measures to strengthen the trustworthiness of avatars. The company hopes to advance an industry-wide consensus on this issue within the year.
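One concrete family of such technical measures, and one the Deep Synthesis Regulations' labeling requirements point toward, is attaching provenance metadata to generated content so that hyper-real output remains identifiable as synthetic. The sketch below is a minimal illustration under stated assumptions: the field names are invented for this example and do not follow any real standard's schema or a particular vendor's implementation.

```python
import hashlib
import json

def label_synthetic_content(payload: bytes, provider: str, model: str) -> dict:
    """Return a provenance record binding a content hash to its synthetic origin.

    Illustrative only: real systems use standardized, signed manifests
    and often embed invisible watermarks in the media itself.
    """
    return {
        "content_sha256": hashlib.sha256(payload).hexdigest(),  # ties record to exact bytes
        "synthetic": True,                # the conspicuous "AI-generated" flag
        "technology_provider": provider,  # who supplied the generation technology
        "model": model,                   # which model produced the content
    }

record = label_synthetic_content(b"rendered avatar frame", "ExampleCorp", "avatar-gen-v1")
print(json.dumps(record, indent=2))
```

Because the record includes a hash of the exact bytes, any later tampering with the content breaks the link to its label, which is the basic property provenance schemes rely on.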
According to SenseTime, member companies of the Virtual Reality and Metaverse Industry Alliance (XRMA) agree that the issue of "trustworthiness" is as important as industrial development: technological advancement and social responsibility must both be attended to. Another consensus is that "avatars are not decoupled from reality": AIGC is not decoupled from the will of real people, the ownership of avatars is not decoupled from real-world constraints, and the industry is not decoupled from regulation. This consensus is expected to be elaborated further in the trusted virtual-human AIGC standard and future industry standards.