MOTION FOR A RESOLUTION BY THE COMMITTEE ON ARTIFICIAL INTELLIGENCE IN A DIGITAL AGE (AIDA)
Machine Learning, Human Unlearning: The transition into the digital age raises concerns about the discriminatory biases of artificial intelligence against racial and ethnic minority groups. How can the EU combat bias in algorithms to ensure that the racial discrimination present in our current society is not replicated in the digital world?
Submitted by: Hugo Philipsen, Sofya Valeeva, Quinn Purmer, Joelle ten Cate and Kasper Feremans (BE, Chairperson),
The European Youth Parliament aims to limit the use of black-box AI models and to regulate the input data of AI models in order to improve the trustworthiness of AI and minimise the harm caused to ethnic minorities by biased AI systems,
because
- Controversies surrounding AI have grown, particularly concerning its decision-making processes and discriminatory biases,
- Many AI models replicate racial or ethnic biases found in society; for example, facial recognition performs worse on ethnic minorities,
- The decision-making processes of the best-performing AI models are opaque, making it difficult to detect unwanted biases,
- Even when unwanted features are removed from the data an AI model is trained on, they may still affect the model through proxies,
- Some bias is required for an AI model to extrapolate from information and make better-informed decisions,
- Eradicating bias from AI gives rise to moral contradictions, as the goal of improving fairness conflicts with the goal of improving efficiency and accuracy,
by
- Requests the European Commission to require that a watermark and disclaimer be included on all products generated with the help of an AI model, underscoring their potential bias;
- Instructs AI Watch to create, maintain, and oversee a general database of training data sensitive to discriminatory biases, which is to be incorporated into the training of all AI models;
- Calls upon Member States to prohibit highly unexplainable ‘black-box’ AI models in all fields except academia;
- Urges the European Commission to update the Artificial Intelligence Act (AIA) by requiring the inclusion of a supplementary explainable AI (XAI) model with every black-box AI model, giving insight into the decision-making process of black-box models;
- Urges Member States to prohibit the use of any AI models at all levels of the justice system.
ANNEX: DEFINITIONS BELONGING TO THE MOTION FOR A RESOLUTION BY THE COMMITTEE ON ARTIFICIAL INTELLIGENCE IN A DIGITAL AGE (AIDA)
For the purposes of this resolution:
- Black-box AI is the collective term for AI models that provide output in an unexplainable way, i.e. without giving any insight into the model's decision-making process.
- XAI is the term used to describe a meta-level AI model that attempts to model the decision-making processes of a black-box AI model while operating in an explainable way (an illustrative surrogate-model sketch is given below).
- Proxies are attributes of training data that are not explicitly discriminatory in themselves but correlate highly with unwanted attributes (an illustrative sketch is given below).
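The following is a minimal, illustrative sketch of one common XAI technique, a global surrogate model, assuming Python with scikit-learn and purely synthetic data; it is not part of the resolution text, and the dataset and feature names are hypothetical.

```python
# Illustrative only: approximating a black-box model with an explainable
# surrogate (a "global surrogate" XAI technique). Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic tabular data standing in for, e.g., loan or hiring records.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The "black-box": an ensemble whose individual decisions are hard to trace.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The XAI surrogate: a shallow decision tree trained to imitate the
# black-box's predictions rather than the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black-box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the black-box: {fidelity:.2%}")

# Human-readable rules approximating the black-box's decision-making process.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```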
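Likewise, the following minimal sketch illustrates how a proxy can leak a removed protected attribute, again assuming Python with scikit-learn and synthetic data; the attribute names (group, postcode_area, income) and the chosen correlation and approval rates are hypothetical.

```python
# Illustrative only: a proxy attribute leaking a removed protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (e.g., membership of an ethnic group).
group = rng.integers(0, 2, size=n)

# A proxy: not explicitly discriminatory, but highly correlated with the
# group (e.g., a postcode area shaped by residential segregation).
postcode_area = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historically biased outcomes: group 1 is approved far less often.
approved = np.where(group == 1, rng.random(n) < 0.3, rng.random(n) < 0.7).astype(int)

# Train WITHOUT the protected attribute; only the proxy and a neutral feature.
income = rng.normal(size=n)
X = np.column_stack([postcode_area, income])
model = LogisticRegression().fit(X, approved)

# The outcome gap persists even though 'group' was never shown to the model,
# because the postcode proxy carries the same information.
pred = model.predict(X)
for g in (0, 1):
    print(f"Predicted approval rate for group {g}: {pred[group == g].mean():.2%}")
```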