The EU intends to require companies developing generative AI tools such as ChatGPT to disclose whether they are using copyrighted material in their systems. The agreement paves the way for the world's first comprehensive AI law, the Artificial Intelligence Act, which is expected to have a global impact.
On April 27, local time, after months of intense negotiations, members of the European Parliament bridged their differences and reached a tentative political agreement on the proposed AI Act, which would require companies deploying generative AI tools such as ChatGPT to disclose the copyrighted material used to develop their systems. The agreement paves the way for the world's first comprehensive AI law.
The text of the proposal may still need to be fine-tuned at a technical level before a key committee vote scheduled for May 11, with a full vote expected in mid-June.
Last-Minute Changes: Generative AI Regulation
The Artificial Intelligence Act is expected to be a landmark piece of EU legislation that has been in the works for more than two years. Lawmakers propose to categorize AI tools according to their perceived risk level, from minimal and limited to high and unacceptable. Governments and companies that use these tools would face different obligations depending on the risk level.
The bill is broad in scope and would govern all providers of AI products or services, covering systems that can generate content, predictions, recommendations or decisions influencing the environments they interact with.
In addition to the use of AI by businesses, the bill also covers AI used by the public sector and law enforcement, and is meant to work in concert with laws such as the General Data Protection Regulation (GDPR). Users of AI systems that interact with humans, are used for surveillance purposes or generate "deepfake" content will face strict transparency obligations.
Until the last minute, EU lawmakers were debating some of the most controversial parts of the proposal.
"General purpose AI system" is a category proposed by lawmakers to cover AI tools with multiple applications, such as generative models like ChatGPT. How to handle general purpose AI systems was one of the most hotly debated topics in the negotiations. The European Parliament confirmed its earlier proposal to impose stricter requirements on one subcategory of general purpose AI systems: foundation models. Under these requirements, companies developing generative AI tools such as ChatGPT must disclose whether they are using copyrighted material in their systems.
The only significant change on the eve of the agreement is regarding generative AI models, "which must be designed and developed in compliance with EU law and fundamental rights, including freedom of expression."
The AI Act also prohibits "purposeful" manipulation. The term is controversial because intent may be difficult to prove, but it has been retained.
In addition, in the areas of law enforcement, border management, the workplace and education, the bill calls for a ban on AI-powered emotion recognition software.
EU lawmakers extended the ban on predictive policing from criminal to administrative offences, citing the Dutch child welfare scandal, in which thousands of families were wrongly accused of fraud because of flawed algorithms.
High-risk classification changes
Many AI tools may be considered high-risk, such as those used for critical infrastructure, law enforcement or education. They sit one level below "unacceptable" and so would not be banned outright, but would require a high degree of transparency in their operation. Users of high-risk AI may need to complete rigorous risk assessments, document their activities, and provide data to authorities for review. This may increase compliance costs for companies.
The initial proposal automatically categorized AI solutions in certain key areas and use cases as high-risk, meaning that vendors would have to comply with a stricter regime, including requirements for risk management, transparency and data governance. The European Parliament introduced an additional tier so that these categories of AI models would only be considered high risk if they pose a significant risk to health, safety or fundamental rights.
Significant risk is defined as "a risk that is significant because of its severity, intensity, probability of occurrence and duration of impact, and that is capable of affecting a person, multiple persons or a specific group of persons".
AI used to manage critical infrastructure such as energy grids or water systems would also be classified as high risk if it poses a serious environmental risk, according to the Greens.
In addition, center-left lawmakers secured a provision that recommendation systems on very large online platforms, as defined in the Digital Services Act (DSA), would be considered high-risk.
MEPs included additional safeguards for providers of high-risk AI models that process sensitive data (such as sexual orientation or religious affiliation) to detect negative bias. In addition, the assessment must occur in a controlled environment. Sensitive data cannot be transmitted to other parties and must be deleted after the bias assessment. Providers must also document why the data processing occurred.
The National Law Review reported on April 26 that "the AI Act will have global implications as it will apply to organizations that provide or use AI systems in the EU; and to providers or users of AI systems located in third countries, including the UK and the US, if the output generated by those AI systems is used in the EU."