MOTION FOR A RESOLUTION BY THE COMMITTEE ON CIVIL LIBERTIES, JUSTICE AND HOME AFFAIRS II
A major concern with machine learning algorithms is that they may perpetuate bias already present in the data used to train them. To what extent should the EU intervene to ensure the fair and equal treatment of all its citizens, whilst considering the complex and opaque nature of many of these algorithms?
Submitted by: Dure Afroz (NL), Amélie Beenhakkers (NL), Laura Dominicy (LU), Danielle Kok (NL), Juliëtte Kok (NL), Áron van der Meer (NL), Finn Russell (NL), Hayat Solmaz (TR), Raphael Tsiamis (Chairperson, GR)
The European Youth Parliament,
- Convinced of the connection between AI bias[1] and existing social prejudices due to the skewed representation of socio-economic groups in AI training data,
- Noting the direct correlation between the limited representation of minorities in AI training data and the lack of diversity in the field of AI development and applications,
- Emphasising the significant social repercussions of biased AI systems on the inclusivity and proper functioning of governmental services such as healthcare and public administration,
- Alarmed by the limited human oversight of the output of automated decision-making by actors developing and implementing AI,
- Recognising the lack of transparency in the output of automated decision-making as a result of the complexity of AI algorithms,
- Aware of the policy challenge of regulating AI technologies due to the rapid development and intricacy of machine-learning processes,
- Taking into account Member States’ desire to facilitate AI innovation through a preference for soft law[2] measures on ethical AI over direct regulation,
- Noting with concern that trade secrecy policies among AI companies regarding their algorithms result in:
- a lack of transparency,
- an unwillingness to cooperate,
- Concerned by the limited action of companies developing and applying AI regarding its ethical implementation and potential discrimination,
- Disappointed by the limited investment in ethical AI systems due to the perception of socially responsible practices as not profitable;
Bias in AI Development
- Encourages companies developing AI technologies to actively combat bias in algorithms by:
- working towards a more equal representation of socio-economic groups in data sets used in the development of AI algorithms,
- testing the implementation of AI products on a more diverse range of training groups before releasing them on the market,
- setting up departments specifically tasked with monitoring the ethical implementation of their AI algorithms and researching potential misrepresentation of minorities,
- providing data to surveys on AI bias by international organisations researching ethical AI;
- Suggests that Member States promote the engagement of minority students in the development and implementation of AI through:
- scholarships funded by Erasmus+[3],
- public awareness campaigns about the need for diversity in AI,
- educational programmes in schools developed by National Ministries;
- Designates the Directorate-General for Communications Networks, Content and Technology (DG CONNECT[4]) to expand the responsibilities of the High-Level Expert Group on AI (AI HLEG[5]) to include:
- auditing and approving datasets used in artificial intelligence projects under a non-disclosure agreement[6],
- supporting European AI companies in creating more diverse and representative datasets for AI projects,
- ensuring the compliance of AI projects with the Ethics Guidelines for Trustworthy AI[7],
- proposing policy updates to the European AI strategy on a biannual basis,
- assisting European AI companies in detecting vulnerabilities and bias in their AI systems;
Supervision of AI
- Instructs Member States to reduce their AI dependence in areas identified as ‘high-risk’[8] by requiring human supervision of any automated decision-making;
- Encourages Member States to continue supporting AI innovation in areas not covered by the shared EU competences through the national promotion of the Ethics Guidelines for Trustworthy AI;
Ethical responsibility of companies
- Recommends that the Directorate-General for Justice and Consumers[9] promote socially responsible company policies for ethical AI by:
- subsidising European companies developing AI in accordance with the Ethics Guidelines for Trustworthy AI,
- funding workplace training on ethical AI,
- issuing a European certification label for companies adhering to principles of ethical AI;
- Asks DG CONNECT to increase the transparency and reduce the vulnerabilities of AI systems by funding research into explainable artificial intelligence[10].
Footnotes:
1. AI bias, or algorithmic bias, is a phenomenon that occurs when an algorithm produces systemically prejudiced results due to erroneous assumptions or data biases in the machine learning process.
2. The term ‘soft law’ refers to non-binding legal instruments, such as subsidies, which aim to incentivise stakeholders towards a set goal instead of regulating their actions through specific measures.
3. Erasmus+ is the EU’s programme for supporting growth, employment, and social inclusion in Europe through a focus on education and training for young people.
4. The Directorate-General for Communications Networks, Content and Technology (DG CONNECT) is the department of the European Commission responsible for developing and implementing policies to make Europe fit for the digital age.
5. The High-Level Expert Group on AI (AI HLEG) is a group of 52 AI experts working under DG CONNECT to advise on the implementation of the European AI Strategy.
6. A non-disclosure agreement is a legal contract that keeps confidential specified information shared between two or more parties.
7. The document ‘Ethics Guidelines for Trustworthy AI’ was prepared by the AI HLEG to improve the quality, safety, and trustworthiness of AI.
8. The ‘high-risk’ areas in AI are healthcare, transport, policing, recruitment, and the legal system, as identified in the position paper submitted in 2020 by 14 Member States in response to legislative initiatives on AI by the European Commission.
9. The Directorate-General for Justice and Consumers is the department of the European Commission responsible for European policy on justice and consumer rights.
10. Explainable AI refers to methods and techniques in the application of AI that enable human experts to understand the output of automated decision-making.