
The global AI legislative process is clearly accelerating, as regulators around the world race to keep pace with the speed of AI evolution.
On June 14, the European Parliament passed the draft negotiating mandate of the AI Act with 499 votes in favor, 28 votes against and 93 abstentions. In accordance with EU legislative procedures, the European Parliament, EU member states and the European Commission will begin "tripartite negotiations" to determine the final terms of the Act.
The European Parliament said it is "ready to negotiate the first ever AI Act". U.S. President Joe Biden has signaled his intent to control AI, members of Congress have submitted proposed AI regulation bills, and Senate Democratic Leader Chuck Schumer has presented his "SAFE Innovation Framework" for AI, with plans to develop a federal-level AI bill in just "a few months."
China has also put AI legislation on the agenda, with a draft AI law expected to be submitted to the Standing Committee of the National People's Congress for deliberation this year. On June 20, the first batch of registered domestic deep synthesis service algorithms was released, listing 41 algorithms from 26 companies including Baidu, Alibaba and Tencent.
Although China, the United States and the European Union all advocate AI regulatory principles of accuracy, security and transparency, they differ considerably in their specific ideas and approaches. Behind the race to enact comprehensive AI laws lies each side's desire to export its own rules and seize the rule-making advantage.
Some domestic experts have called for legal regulation of AI as soon as possible, but the practical difficulties now facing legislators cannot be ignored. There is also an important trade-off: regulate or develop. This is not a binary choice, but in the digital field, balancing the two is quite difficult.
EU Sprint, China and US Speed Up
If all goes well, the Artificial Intelligence Act passed by the European Parliament is expected to receive final approval by the end of this year. The world's first comprehensive AI regulatory law is thus likely to land in the EU.
"The draft will push other countries still on the fence to accelerate their own legislation. Whether AI technology should be brought under legal regulation has always been controversial. Now it seems that once the Artificial Intelligence Act takes effect, relevant network platforms, such as those whose business content is mainly generated from user information, will inevitably assume a higher obligation of review." Zhao Jingwu, an associate professor at Beijing University of Aeronautics and Astronautics Law School, told China Newsweek.
As part of its digital strategy, the EU hopes to comprehensively regulate AI through the AI Act, and the strategic layout behind it is plain to see.
Peng Xiaoyan, executive director of Beijing Wanshang Tianqin (Hangzhou) Law Firm, told China Newsweek that the Artificial Intelligence Act applies not only within the EU, but also to system providers or users located outside the EU whose system output is used in the EU. This greatly expands the Act's jurisdictional reach, and also hints at the underlying contest for jurisdiction over data as a strategic resource.
In the article "The World's First AI Legislation: The Difficult Balance between Innovation and Regulation", Jin Ling, a researcher and deputy director of the European Institute of the Chinese Academy of International Studies, also wrote that the AI Act highlights the EU's moral claims in AI governance and is another attempt by the EU to exert its normative power, using the advantage of rules to make up for its technical shortcomings. This reflects the EU's strategic intent to seize the moral high ground in the field of AI.
The AI Act has been in the making for two years, and in April 2021, the European Commission proposed AI legislation based on a "risk classification" framework, which has since been discussed and revised in several rounds. After the popularity of generative AI such as ChatGPT, EU lawmakers urgently added another "patch".
In a new twist, the latest draft of the AI Act strengthens transparency requirements for general-purpose AI. For example, generative AI built on foundation models must label generated content to help users distinguish deepfakes from real information, and must ensure that the generation of illegal content is prevented. Foundation-model providers such as OpenAI and Google that use copyrighted data in training are also required to disclose details of the training data.
In addition, real-time remote biometrics in public places has been changed from a "high risk" level to a "prohibited" level, meaning that AI technology cannot be used for face recognition in public places in EU countries.
The latest draft also further increases the penalties for violations, from a maximum of €30 million or 6% of the infringing company's global turnover for the previous fiscal year to a maximum of €40 million or 7% of the infringing company's global annual turnover for the previous year. This is quite a bit higher than the maximum fine of 4% of global revenue or €20 million under Europe's landmark data security law, the General Data Protection Regulation.
Peng Xiaoyan told China Newsweek that the increase in penalties reflects, from another angle, the EU authorities' determination and resolve to regulate artificial intelligence. For technology giants such as Google, Microsoft and Apple, with hundreds of billions of dollars in revenue, fines could reach tens of billions of dollars if they violate the provisions of the Artificial Intelligence Act.
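In concrete terms, the ceiling works out as the greater of the fixed amount and the turnover percentage. A minimal sketch of that arithmetic (the function name and the revenue figure are illustrative assumptions, not drawn from the Act's text):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Ceiling of a fine under the latest draft of the EU AI Act:
    the greater of EUR 40 million or 7% of the infringing company's
    global annual turnover for the previous year (illustrative sketch)."""
    return max(40_000_000, 0.07 * annual_turnover_eur)

# Hypothetical company with EUR 250 billion in annual turnover:
# the 7% term dominates, giving a ceiling of EUR 17.5 billion.
print(f"{max_fine_eur(250e9):,.0f}")  # 17,500,000,000

# For a small firm with EUR 100 million turnover, the fixed
# EUR 40 million floor dominates instead.
print(f"{max_fine_eur(100e6):,.0f}")  # 40,000,000
```

This "greater of" structure mirrors the GDPR's fine mechanism, which is why the article's comparison to the GDPR's 4%/€20 million cap is apt.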
And across the pond in the United States, as Washington was busy responding to calls from Musk and others for stronger AI controls, President Joe Biden met with a group of AI experts and researchers in San Francisco on June 20 to discuss how to manage the risks of the new technology. Biden said at the time that while seizing AI's enormous potential, the risks it poses to society, the economy and national security need to be managed.
Risk management has become a hot topic in AI against a backdrop in which the U.S. has not moved as aggressively on AI technology as it has on antitrust, and has yet to introduce a comprehensive, federal-level AI regulatory law.
The U.S. federal government's first formal foray into AI regulation came in January 2020, when it released the draft Guidance for Regulation of Artificial Intelligence Applications, covering regulatory and non-regulatory approaches to emerging AI issues. The National AI Initiative Act of 2020, which took effect in 2021, is more of a policy layout for the AI field, still some distance from AI governance and strong regulation. A year later, the Blueprint for an AI Bill of Rights (the "Blueprint"), released by the White House in October 2022, provides a supporting framework for AI governance, but it is not official U.S. policy and is not binding.
Little progress has been made on U.S. AI legislation, which has already drawn much discontent. Many have criticized that the U.S. has fallen behind the EU and China in terms of rule-making for the digital economy. However, perhaps seeing that the EU AI Act is about to pass its final "hurdle," the U.S. Congress has recently shown signs of legislative acceleration.
On the day of Biden's AI meeting, Democratic Representatives Ted W. Lieu and Anna Eshoo, along with Republican Representative Ken Buck, submitted a proposal for the National AI Commission Act. Meanwhile, Democratic Senator Brian Schatz (D-HI) will introduce companion legislation in the Senate, jointly focusing on AI regulatory issues.
According to the bill, the AI Commission will consist of a total of 20 experts from government, industry, civil society and computer science, and will review current U.S. approaches to AI regulation and work together to develop a comprehensive regulatory framework.
"AI is doing amazing things in society. If left unchecked and unregulated, it can cause significant harm. Congress must not stand idly by." Ted Lieu said in a statement.
A day later, on June 21, Senate Democratic Leader Chuck Schumer (D-N.Y.) gave a speech at the Center for Strategic and International Studies (CSIS) unveiling his "SAFE Innovation Framework" for AI (the "AI Framework"), which encourages innovation while advancing security, accountability, foundations and explainability, echoing macro plans including the Blueprint. He had proposed the framework in April, but the details were largely undisclosed at the time.
Behind the AI Framework lies one of Chuck Schumer's legislative strategies. In his speech, he said he wanted to develop a federal-level AI bill in just "a few months." However, the U.S. legislative process is cumbersome, requiring not only votes in both the House and the Senate but also several rounds of hearings, which takes a long time.
To speed up the process, Chuck Schumer plans to hold a series of AI insight forums as part of the AI framework, covering 10 topics, including innovation, intellectual property, national security and privacy, starting this September. He told the outside world that the insight forums will not replace congressional hearings on AI, but rather run in parallel so that the legislature can introduce policy on the technology in a matter of months rather than years. He predicted that U.S. AI legislation may not begin to see anything concrete until the fall.
In early June, the General Office of the State Council issued the State Council's 2023 legislative work plan, which stated that a draft artificial intelligence law is being prepared for submission to the Standing Committee of the National People's Congress for deliberation.
Under China's Legislation Law, when the State Council submits a draft law to the Standing Committee of the National People's Congress, the Council of Chairpersons decides whether to place it on the agenda of a Standing Committee session, or first refers it to the relevant special committee for deliberation and a report before making that decision. A draft generally goes through three readings before being put to a vote.
Many countries have accelerated AI legislation this year, a trend Peng Xiaoyan attributes to the combined push of international competition and technological development.
"Data is increasingly becoming a national strategic resource, and countries hope to establish jurisdiction through legislation and seize the AI discourse. At the same time, the rapid iteration of AI technologies such as ChatGPT has shown society new hope for the development of strong AI. New technologies inevitably bring new social problems and contradictions that require regulatory intervention, and the development of technology has in some sense driven the renewal of legislation." Peng Xiaoyan said.
Divergence far more than convergence
China, the U.S. and the EU are the main drivers of global AI development, but there are some differences in AI legislation among the three.
The EU AI Act classifies AI applications into four risk tiers based on their use and function. Through several rounds of draft amendments, "risk classification" has remained the core concept of AI governance in the EU.
The top of the pyramid corresponds to an "unacceptable" risk to human security. For example, scoring systems that classify people based on their social behavior or personal characteristics would be banned altogether.
In the latest draft, the European Parliament has expanded the list of "unacceptable risks" to prevent AI systems from being intrusive and discriminatory. Six categories of AI systems, such as biometrics in public space, emotion recognition, predictive policing (based on profiling, location, or past criminal behavior), and randomly capturing facial images from the Internet, are banned altogether.
The second category is AI systems that negatively affect human safety or fundamental rights, which are considered "high risk." These include AI systems used in products such as aviation, automobiles and medical devices, as well as eight specific areas that must be registered in an EU database, covering critical infrastructure, education, training and law enforcement. High-risk AI systems must undergo prior conformity assessment and comply with a series of requirements and obligations under the AI rules before they can enter the EU market.
In addition, AI systems that influence voters and election results, as well as recommendation systems used by social media platforms with more than 45 million users under the EU Digital Services Act, such as Facebook, Twitter and Instagram, will also be included in the high-risk list.
At the bottom of the pyramid are AI systems with limited risk and those with little or no risk. The former carry specific transparency obligations and must inform users that they are interacting with an AI system, while the latter face no mandatory requirements and are largely unregulated, such as spam filters.
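The four-tier pyramid just described can be summarized as a simple lookup table. This is only an illustrative sketch: the tier names, obligations and example systems are paraphrased from the draft as reported here, not the official legal taxonomy.

```python
# Illustrative summary of the draft EU AI Act's risk pyramid
# (paraphrased from press coverage, not the official legal text).
RISK_TIERS = {
    "unacceptable": {
        "obligation": "banned outright",
        "examples": ["social scoring", "real-time remote biometrics in public",
                     "predictive policing", "emotion recognition"],
    },
    "high": {
        "obligation": "prior conformity assessment and EU database registration",
        "examples": ["medical devices", "critical infrastructure",
                     "education and training", "law enforcement"],
    },
    "limited": {
        "obligation": "transparency: tell users they are interacting with AI",
        "examples": ["chatbots", "deepfake labeling"],
    },
    "minimal": {
        "obligation": "none (largely unregulated)",
        "examples": ["spam filters"],
    },
}

def obligation_for(tier: str) -> str:
    """Look up the regulatory consequence attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # banned outright
```

The point of the structure is that obligations attach to the tier, not to the individual system: classifying a use case is the whole regulatory question.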
The AI Act is seen by many in the industry as having many sharp "teeth" because of its strict regulatory provisions. However, the bill also attempts to find a balance between strong regulation and innovation.
For example, the latest draft requires member states to establish at least one "regulatory sandbox" that SMEs and startups can use free of charge to test innovative AI systems in a supervised, secure setting before deployment, until they meet compliance requirements. The EU believes the proposal will allow authorities to monitor technological change in real time while helping AI companies keep innovating under reduced regulatory pressure.
According to Jin Ling in the aforementioned article, the EU's upstream governance approach requires companies to bear more upfront costs on the one hand, and dampens their enthusiasm for investment because of uncertainty in risk assessment on the other. Thus, despite the Commission's repeated emphasis that AI legislation will support innovation and growth in Europe's digital economy, realistic economic analysis does not seem to support this conclusion. The Act reflects the EU's inherent difficulty in effectively balancing the promotion of innovation with the protection of rights.
The United States, like the EU and China, supports a largely risk-based approach to AI regulation that advocates accuracy, security, and transparency. However, in Zhao Jingwu's view, U.S. regulatory thinking is more focused on leveraging AI and promoting innovation and development in the AI industry, ultimately to maintain U.S. leadership and competitiveness.
"Unlike the 'risk prevention and technology safety' regulatory philosophy upheld by China and the EU, the U.S. puts commercial development first. China and the EU both center regulation on the safe application of AI technology, aiming to prevent its abuse from infringing individual rights, while the U.S. makes industrial development the focus of regulation." Zhao Jingwu said.
One study found that U.S. congressional legislation has focused primarily on encouraging and guiding government use of AI. For example, the U.S. Senate had introduced an AI innovation bill in 2021 that would require the U.S. Department of Defense to implement a pilot program to ensure it has access to the best AI and machine learning software capabilities.
Chuck Schumer, in his aforementioned speech, identified innovation as the North Star: his AI Framework is about unlocking AI's vast potential and supporting U.S.-led AI technology innovation. The Guidance for Regulation of Artificial Intelligence Applications opens with a clear statement that the U.S. should continue to promote technological advancement and innovation. The ultimate goal of the National AI Initiative Act of 2020 is likewise to ensure that the U.S. remains a global leader in AI technology through increased research investment and the creation of workforce systems.
Peng Xiaoyan said that, judged by its regulatory design, the U.S. still maintains a light-touch regulatory posture toward AI at the legislative and institutional level, while at the social level it actively encourages the innovation and expansion of AI technology with an open attitude.
In contrast to the EU, with its explicit investigative powers and comprehensive regulatory coverage, the U.S. has adopted a decentralized approach to AI regulation, with individual states and agencies advancing AI governance piecemeal. As a result, national AI regulatory initiatives remain broad and principles-based.
For example, the Blueprint, a landmark in U.S. AI governance policy, sets out five basic principles: safe and effective systems, prevention of algorithmic discrimination, protection of data privacy, notice and explanation, and human involvement in decision making, without more detailed provisions.
According to Peng, the Blueprint does not set out specific implementation measures, but rather builds a basic framework for AI development in the form of principle regulations designed to guide the design, use and deployment of AI systems.
"Specifications such as these are not mandatory, which reflects the U.S. intention to support the development of the AI industry. Artificial intelligence is still at an emerging stage of development, and high-intensity regulation would inevitably limit industry growth and innovation to some extent, so the United States has kept a relatively restrained attitude in its legislation." Peng Xiaoyan said.
"Without laws giving agencies new powers, they will have to regulate the use of AI based on the powers they already have. On the other hand, by keeping the ethical principles related to AI less prescriptive, agencies can decide for themselves how to regulate and what use rights to reserve." This leaves federal agencies, led by the White House, both constrained and free, according to Carnegie analyst Hadrien Pouget.
The use- and innovation-led philosophy of AI governance predestines the U.S. to a less-than-stiff "fist." Alex Engler, a fellow at the Brookings Institution, a leading U.S. think tank, notes that the EU and the U.S. are taking different approaches to regulating socially impactful AI in education, finance and employment.
In terms of specific AI applications, the EU's Artificial Intelligence Act has transparency requirements for chatbots, while there are no federal-level regulations in the United States. Facial recognition is considered an "unacceptable risk" in the EU, while the U.S. provides public information through the National Institute of Standards and Technology (NIST) Face Recognition Vendor Testing Program, but does not mandate rules.
"The EU's regulatory scope not only covers a broader range of applications, but also sets more rules for these AI applications. The U.S. approach, on the other hand, is more narrowly limited to adapting current agency regulators to try to govern AI, and the scope of AI is much more limited." Alex Engler said that despite the existence of broadly identical principles, there is far more divergence than convergence in AI risk management.
Comparing the AI regulatory models of China, the EU and the U.S., Zhao Jingwu found that China regulates by application scenario, developing special rules targeted at face recognition, deep synthesis, automated recommendation and other specific uses. The EU is risk-level oriented, asking whether the risk posed by an AI application is acceptable. The U.S., by contrast, judges the legality of AI applications within the framework of its established traditional legal system.
In addition, the U.S. is focusing more attention on AI research and investing more money in it. Just in early May, the White House announced an investment of about $140 million to establish seven new national AI institutes. Some researchers believe that the U.S. move is a move to better understand AI and thus alleviate concerns arising from the regulatory process.
Peng Xiaoyan added that China has taken measures to encourage the development of AI technology while imposing limited regulation on related fields, guiding AI development by reconciling supportive policies with management requirements.
The global AI legislative process is clearly accelerating, with regulation around the world catching up with the speed of AI evolution.
On June 14, the European Parliament passed the draft negotiating mandate of the AI Act with 499 votes in favor, 28 votes against and 93 abstentions. In accordance with EU legislative procedures, the European Parliament, EU member states and the European Commission will begin "tripartite negotiations" to determine the final terms of the Act.
The European Parliament said it is "ready to negotiate the first ever AI Act". U.S. President Joe Biden signals control of AI, and a U.S. congressman submits proposed AI regulation legislation. U.S. Senate Democratic Leader Chuck Schumer presented his "Framework for AI Security Innovation" and plans to develop a federal-level AI bill in just "a few months.
The company is also on the agenda, with a draft AI law ready to be submitted to the Standing Committee of the National People's Congress for consideration this year. on June 20, the first batch of domestic deep synthesis service algorithms for the record list has also been released, Baidu, Alibaba, Tencent and other 26 companies, a total of 41 algorithms on the list.
Although China, the United States and the European Union all advocate the principle AI regulatory concepts of accuracy, security and transparency, there are many differences in the specific ideas and approaches. The enactment of comprehensive AI laws is behind the export of their own rules, and the desire to grasp the rules advantage.
Some domestic experts have called for legal regulation of AI as soon as possible, but the current realistic difficulties faced cannot be ignored. There is also an important consideration: to regulate or to develop. This is not a dichotomous choice, but in the digital field, balancing the two is quite difficult.
EU Sprint, China and US Speed Up
If all goes well, the Artificial Intelligence Bill passed by the European Parliament is expected to be approved by the end of this year. The world's first comprehensive AI regulatory law is likely to land in the EU.
"The draft will influence other countries that are on the fence to accelerate their legislation. It has always been controversial whether AI technology should be included in the rule of law regulation. Now it seems that after the Artificial Intelligence Bill is landed, relevant network platforms, such as those whose business content is mainly generated by user information, will inevitably assume a higher obligation to review." Zhao Jingwu, an associate professor at Beijing University of Aeronautics and Astronautics Law School, told China Newsweek.
As part of its digital strategy, the EU hopes to comprehensively regulate AI through the AI Bill, and the strategic layout behind it has been put on the table.
Peng Xiaoyan, executive director of Beijing Wanshang Tianqin (Hangzhou) Law Firm, told China Newsweek that the Artificial Intelligence Act applies not only to the EU, but also to system providers or users located outside the EU, but whose system output data is used in the EU. It greatly expands the scope of jurisdictional application of the bill, and also gives a glimpse of the end of the jurisdictional scope of the data element seizure.
In the article "The World's First AI Legislation: The Difficult Balance between Innovation and Regulation", Jin Ling, a researcher and deputy director of the European Institute of the Chinese Academy of International Studies, also wrote that the AI Bill highlights the moral advantages of AI governance in the EU, which is another attempt of the EU to exert its normative power and make up for the technical shortcomings through the advantage of rules. This reflects the EU's strategic intent to seize the moral high ground in the field of AI.
The AI Act has been in the making for two years, and in April 2021, the European Commission proposed AI legislation based on a "risk classification" framework, which has since been discussed and revised in several rounds. After the popularity of generative AI such as ChatGPT, EU lawmakers urgently added another "patch".
In a new twist, the latest draft of the AI Bill strengthens transparency requirements for general purpose AI. For example, generative AI based on basic models must label the generated content to help users distinguish between deep falsification and real information, and to ensure that illegal content is prevented from being generated. Providers of base models like OpenAI, Google, and others that use copyrighted data during the training of their models are also required to disclose details of the training data.
In addition, real-time remote biometrics in public places has been changed from a "high risk" level to a "prohibited" level, meaning that AI technology cannot be used for face recognition in public places in EU countries.
The latest draft also further increases the penalties for violations, from a maximum of €30 million or 6% of the infringing company's global turnover for the previous fiscal year to a maximum of €40 million or 7% of the infringing company's global annual turnover for the previous year. This is quite a bit higher than the maximum fine of 4% of global revenue or €20 million under Europe's landmark data security law, the General Data Protection Regulation.
Peng Xiaoyan told China Newsweek that the increase in penalty amount side-by-side reflects the EU authorities' determination and strength to regulate artificial intelligence. For Google, Microsoft, Apple and other technology giants with hundreds of billions of dollars in revenue, fines could reach tens of billions of dollars if they violate the provisions of the Artificial Intelligence Act.
And across the pond in the United States, as Washington was busy responding to calls from Musk and others for stronger AI controls, President Joe Biden met with a group of AI experts and researchers in San Francisco on June 20 to discuss how to manage the risks of the new technology. Biden said at the time that while seizing AI's enormous potential, the risks it poses to society, the economy and national security need to be managed.
The context in which risk management has become a hot topic in AI is that the U.S. has not taken as tough a step towards AI technology as antitrust and has yet to introduce federal-level, comprehensive AI regulatory laws.
The U.S. federal government's first formal foray into AI regulation was in January 2020, when it released the AI Application Regulation Guide to provide guidance on regulatory and non-regulatory measures for emerging AI issues. the National AI Initiative Act of 2020, introduced in 2021, is more of a policy layout in the AI field, with AI governance and strong regulation still some distance away. A year later, the AI Bill of Rights Blueprint (the "Blueprint"), released by the White House in October 2022, provides a supportive framework for AI governance, but is not an official U.S. policy and is not binding.
Little progress has been made on U.S. AI legislation, which has already drawn much discontent. Many have criticized that the U.S. has fallen behind the EU and China in terms of rule-making for the digital economy. However, perhaps seeing that the EU AI Act is about to pass its final "hurdle," the U.S. Congress has recently shown signs of legislative acceleration.
On the day of Biden's AI meeting, Democratic Representatives Ted W. Lieu and Anna Eshoo, along with Republican Representative Ken Buck, submitted a proposal for the National Artificial Intelligence Council Act. Meanwhile, Democratic Senator Brian Schatz (D-NY) will introduce companion legislation in the Senate, focusing together on AI regulatory issues.
According to the bill, the AI commission would consist of 20 experts drawn from government, industry, civil society, and computer science, who would review current U.S. approaches to AI regulation and work together to develop a comprehensive regulatory framework.
"AI is doing amazing things in society. If left unchecked and unregulated, it can cause significant harm. Congress must not stand idly by." Ted Lieu said in a statement.
A day later, on June 21, Senate Democratic Leader Chuck Schumer (D-N.Y.) gave a speech at the Center for Strategic and International Studies (CSIS) unveiling his "Framework for Secure Innovation in Artificial Intelligence" (the "AI Framework"), which encourages innovation while advancing security, accountability, foundations, and explainability, echoing macro plans such as the Blueprint. He had floated the framework in April, but its details were largely undisclosed at the time.
Behind the AI framework lies one of Chuck Schumer's legislative strategies. In his speech, he said he wanted to produce a federal AI bill in just "a few months." The U.S. legislative process, however, is cumbersome: a bill must not only pass votes in both the House and Senate but also go through several rounds of hearings, which takes a long time.
To speed up the process, Chuck Schumer plans to hold a series of AI insight forums as part of the AI framework, covering 10 topics including innovation, intellectual property, national security, and privacy, starting this September. He said publicly that the insight forums will not replace congressional hearings on AI but will run in parallel, so that the legislature can introduce policy on the technology in a matter of months rather than years. He predicted that U.S. AI legislation may not begin to produce anything concrete until the fall.
In early June, the General Office of the State Council issued the State Council's 2023 legislative work plan, which stated that a draft artificial intelligence law is being prepared for submission to the Standing Committee of the National People's Congress for deliberation.
Under China's Legislation Law, when the State Council submits a draft law to the NPC Standing Committee, the Council of Chairpersons decides whether to place it on the agenda of a Standing Committee session, or first refers it to the relevant special committee for deliberation and a report before deciding to place it on the agenda. A draft generally goes through three rounds of deliberation before being put to a vote.
Many countries have accelerated AI legislation this year, a surge Peng Xiaoyan attributes to the combined push of international competition and technological development.
"Data is increasingly becoming a national strategic asset, and countries hope to establish jurisdiction through legislation and seize discourse power over AI. At the same time, iterative advances in AI such as ChatGPT have shown society new hope for the development of strong AI. New technologies inevitably bring new social problems and contradictions that require regulatory intervention, so the development of technology has in a sense driven the renewal of legislation," Peng Xiaoyan said.
Divergence far more than convergence
China, the U.S. and the EU are the main drivers of global AI development, but there are some differences in AI legislation among the three.
The EU AI Act classifies AI applications into four risk tiers based on their use and function. Through several rounds of draft amendments, "risk classification" has remained the core concept of EU AI governance.
The top of the pyramid corresponds to an "unacceptable" risk to human security. For example, scoring systems that classify people based on their social behavior or personal characteristics would be banned altogether.
In the latest draft, the European Parliament has expanded the list of "unacceptable risks" to prevent AI systems from being intrusive and discriminatory. Six categories of AI systems, such as biometrics in public space, emotion recognition, predictive policing (based on profiling, location, or past criminal behavior), and randomly capturing facial images from the Internet, are banned altogether.
The second tier covers AI systems that negatively affect human safety or fundamental rights, which are considered "high risk": for example, AI used in products such as aircraft, automobiles, and medical devices, as well as systems in eight specific areas that must be registered in an EU database, covering critical infrastructure, education, training, and law enforcement. "High-risk" AI systems must comply with a series of requirements and obligations, including a prior conformity assessment, before being authorized to enter the EU market.
In addition, AI systems that influence voters and election results, as well as recommendation systems used by social media platforms with more than 45 million users under the EU Digital Services Act, such as Facebook, Twitter and Instagram, will also be included in the high-risk list.
At the bottom of the pyramid are AI systems with limited risk and those with little or no risk. The former carry specific transparency obligations and must inform users that they are interacting with an AI system; the latter face no mandatory requirements and are largely unregulated, as with applications like spam filters.
The AI Act is seen by many in the industry as having many sharp "teeth" because of its strict regulatory provisions. However, the bill also attempts to find a balance between strong regulation and innovation.
For example, the latest draft requires each member state to establish at least one "regulatory sandbox" that SMEs and startups can use free of charge to test innovative AI systems in a supervised, secure setting before deployment, until they meet compliance requirements. The EU's view is that this mechanism lets authorities track technological change in real time while helping AI companies keep innovating under reduced regulatory pressure.
According to Jin Ling in the aforementioned article, the EU's upstream governance approach requires companies to bear more upfront costs on the one hand, and dampens their investment enthusiasm through uncertainty in risk assessment on the other. Thus, despite the Commission's repeated insistence that AI legislation will support innovation and growth in Europe's digital economy, realistic economic analysis does not seem to support that conclusion. The bill reflects an inherent EU conflict: the difficulty of effectively balancing the promotion of innovation with the protection of rights.
The United States, like the EU and China, broadly supports a risk-based approach to AI regulation that emphasizes accuracy, security, and transparency. In Zhao Jingwu's view, however, U.S. regulatory thinking focuses more on leveraging AI and promoting innovation and development in the AI industry, ultimately to maintain U.S. leadership and competitiveness.
"Unlike the 'risk prevention and technological safety' philosophy upheld by China and the EU, the U.S. puts commercial development first. China and the EU both center regulation on the safe application of AI technology, to prevent abuses that infringe on individual rights, while the U.S. makes industrial development the regulatory focus," Zhao Jingwu said.
One study found that U.S. congressional legislation has focused primarily on encouraging and guiding government use of AI. For example, the Senate introduced an AI innovation bill in 2021 that would require the Department of Defense to run a pilot program ensuring its access to the best AI and machine-learning software capabilities.
Chuck Schumer, in his aforementioned speech, identified innovation as the North Star: his AI framework is about unlocking the vast potential of AI and supporting U.S.-led AI innovation. The Guidance for Regulation of Artificial Intelligence Applications opens with a clear statement that the government should continue to promote technological advancement and innovation. The ultimate goal of the National AI Initiative Act of 2020 is likewise to ensure the U.S. remains a global leader in AI through increased research investment and the creation of workforce systems.
Peng Xiaoyan said that, judged by its regulatory design, the U.S. remains in a weak regulatory posture toward AI at the legislative and institutional level, while at the social level it openly and actively encourages the innovation and expansion of AI technology.
In contrast to the EU, with its explicit investigative powers and comprehensive regulatory coverage, the U.S. has adopted a decentralized approach, with individual states and agencies advancing AI governance piecemeal. As a result, national AI regulatory initiatives are broad and principles-based.
For example, the Blueprint, a landmark in U.S. AI governance policy, sets out five basic principles: safe and effective systems, protection against algorithmic discrimination, data privacy, notice and explanation, and human involvement in decision making, without more detailed provisions.
According to Peng, the Blueprint does not set out specific implementation measures, but rather builds a basic framework for AI development in the form of principle regulations designed to guide the design, use and deployment of AI systems.
"Such specifications are not mandatory, which reflects U.S. support for the development of the AI industry. Artificial intelligence is still at an emerging stage, and high-intensity regulation would inevitably constrain industry development and innovation to some degree, so U.S. legislation has maintained a relatively restrained posture," Peng Xiaoyan said.
"Without laws granting agencies new powers, they will have to regulate the use of AI based on the powers they already have. On the other hand, by keeping AI-related ethical principles less prescriptive, agencies can decide for themselves how to regulate and which uses to permit." This leaves federal agencies, led by the White House, both constrained and free, according to Carnegie analyst Hadrien Pouget.
This use- and innovation-led philosophy of AI governance predestines the U.S. to a less-than-hard "fist." Alex Engler, a fellow at the Brookings Institution, a leading U.S. think tank, notes that the EU and the U.S. are taking different approaches to regulating AI with social impact in education, finance, and employment.
In terms of specific applications, the EU's Artificial Intelligence Act imposes transparency requirements on chatbots, while there are no federal-level regulations in the United States. Facial recognition is treated as an "unacceptable risk" in the EU, while the U.S. provides public information through the National Institute of Standards and Technology (NIST) Face Recognition Vendor Test program but mandates no rules.
"The EU's regulatory scope not only covers a broader range of applications but also sets more rules for them. The U.S. approach is narrower, limited to adapting existing agency regulators to govern AI, and covers far fewer AI applications," Alex Engler said, adding that despite broadly similar principles, there is far more divergence than convergence in AI risk management.
Surveying the AI regulatory models of China, the EU, and the U.S., Zhao Jingwu found that China regulates by application scenario, crafting special rules for specific scenarios such as facial recognition, deep synthesis, and automated recommendation. The EU is oriented by risk level, asking whether the risk an AI application poses is acceptable. The U.S., by contrast, judges the legality of AI applications within the framework of its established traditional legal system.
In addition, the U.S. is devoting more attention and money to AI research. In early May, the White House announced roughly $140 million to establish seven new national AI institutes. Some researchers believe the move is meant to better understand AI and thereby ease concerns arising in the regulatory process.
Peng Xiaoyan, for her part, said China has encouraged the development of AI technology while applying limited regulation to related fields, guiding AI's development by reconciling policy support with management requirements.