Image: Erica Snyder / MITTR | Unsplash
The European Union is drafting new rules to make it easier to take legal action against artificial intelligence (AI) providers. A bill unveiled this week, which is expected to become law within the next few years, is part of Europe's push to prevent developers from releasing harmful AI systems. While some in the tech sector worry it will stifle innovation, consumer advocates argue the bill doesn't go far enough.
The positive and negative effects of increasingly prevalent AI technology on individuals, communities, and societies are well documented. Predictive AI systems used to approve or reject loan applications can be less accurate for minorities, while social media algorithms amplify misinformation.
The new legislation, called the AI Liability Directive, will give the EU's AI Act, which is expected to become EU law around the same time, some much-needed teeth. Systems used in law enforcement, recruitment, and health care are all examples of "high-risk" AI applications that would face additional oversight under the proposed AI Act.
Under the new liability rules, a person or business harmed by an AI system will be able to sue the party responsible. The goal is to hold everyone involved in creating, deploying, and using AI systems accountable by requiring them to document the design and training processes used. When tech companies operating in the European Union (EU) don't play by the rules, affected citizens will be able to band together to bring class actions.

For instance, if a job applicant can show that an AI system used to screen résumés discriminated against them, a court could order the AI company to give them access to information about the system so they can determine who is responsible and why. With that evidence, they would have a solid case in court.
It will take at least a few years for the proposal to make its way through the EU's legislative process. The European Parliament and EU member states will amend it, and it will undoubtedly face heavy lobbying from tech companies claiming the rules will have a "chilling" effect on innovation.
According to Mathilde Adjutor, Europe's policy manager for the tech lobbying group CCIA, which represents companies such as Google, Amazon, and Uber, the measure could have a negative impact on software development.
Developers “risk being accountable for software faults and the possible impact of software on users’ mental health” under the new guidelines, she argues.
Given AI's potential for discrimination, the bill's shift of power from corporations to consumers is welcomed by Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research body. Thomas Boué, head of European strategy for the tech lobby BSA, whose members include Microsoft and IBM, said the law would ensure a uniform approach to seeking compensation across the EU when an AI system causes harm.
But other consumer rights groups and campaigners say the proposals don't go far enough and would set the bar too high for consumers bringing claims.
The European Consumer Organization's deputy director general, Ursula Pachl, called the proposal a "big letdown," since it would place the burden of proof on individual consumers to show that an AI system caused them harm or that its developer was negligent.
Pachl warns that "in a future of increasingly complicated and obscure 'black box' AI systems," it will be "practically impossible" for consumers to exercise the new rules. As an example, she points to how difficult it will be to prove that a credit-scoring system discriminated against someone because of their race.
As Claudia Prettner, EU representative at the Future of Life Institute, a nonprofit that focuses on existential AI risk, points out, the law also fails to account for indirect harms caused by AI systems. A better version, Prettner argues, would work like existing rules for cars and animals, holding companies liable for the harm their products cause regardless of fault.
"Artificial intelligence systems are frequently developed for a specific goal but result in unintended negative consequences elsewhere," she says. She cites the example of social media algorithms that were designed to increase user engagement but ended up promoting divisive material.
The EU intends for its AI Act to serve as a model for other countries to follow. It is being watched closely by countries like the United States, where attempts to regulate the technology are underway.
The Federal Trade Commission is currently drafting rules to govern how companies handle data and build algorithms, and it has ordered companies that acquired data improperly to destroy their algorithms. In January of this year, the agency ordered Weight Watchers to do so after the company improperly collected data on minors.
The success or failure of this new EU legislation will have far-reaching consequences for the global regulation of artificial intelligence. The EU's residents, businesses, and regulators all stand to benefit from well-designed AI liability rules. Without them, says Parker, "we cannot make AI useful to humans and society."