Resulting from skewed input data, flawed target variables, or biases built into an algorithm’s design, AI bias embeds real-life discrimination and human rights violations into Artificial Intelligence, damaging both consumers’ trust and the future of innovation. To strengthen the EU’s ethical technological development, it is critical to devise human-centric AI frameworks aligned with the EU’s humanitarian values. While AI legislation has been developing rapidly, it faces significant points of contention concerning technological innovation and transparency, as well as the private sector’s reluctance to accept binding regulations. Ultimately, with biased AI often reinforcing existing social problems, it is important for European legislators and companies alike to critically examine and rectify the potential social repercussions of automated decision-making.
‘To become a global leader in innovation in the data economy and its applications’ stands out as the primary target of the European data strategy. As a key point of the EU’s digital future, AI is attracting significant attention due to not only its promising technological contributions, but also its uncontrolled development. Despite the multidisciplinary benefits of AI, several experts and human rights advocates are concerned about instances of discriminatory and biased behaviour by AI systems against persons of a particular gender, race, socio-economic background or sexual orientation, with allegations targeting local enterprises and leading brands alike over the years. For instance, Google Vision Cloud reportedly practised discrimination against people of colour by labelling images differently depending on skin colour, as either an ‘electronic device’ for white users, or a ‘gun’ for black ones.
With similar allegations against Amazon’s recruiting system for discrimination against women, and even against the US healthcare system for favouring white over black patients, AI bias is seen to magnify existing discrimination in society, further disadvantaging underrepresented groups. Thus, the EU needs to follow a human-centric approach focused on protecting human rights, in order to defend social equality and restore citizens’ trust in AI systems.
Stakeholders and Legal Competences
The European Commission (EC) is the executive arm of the EU, responsible for proposing legislation and managing EU policy. For matters related to AI and digital technologies in general, the EC acts through its Directorate-General for Communications Networks, Content and Technology (DG CONNECT), the department responsible for the EU’s Digital Agenda.
As a relatively new and rapidly growing policy area, AI, along with its socio-economic implications, lies at the intersection of various shared EU competence areas, i.e., research and development (R&D), consumer protection, justice and security, and the single market. This means that the EC and Member States can both initiate legislation, with EU laws taking precedence. It is especially important for Member States to create their own legislation on areas adjacent to AI bias but not covered by the shared EU competences, such as national health systems.
The European Parliament (EP) is the legislative branch of the EU, responsible for debating the legislative proposals of the EC, particularly through its IMCO, ITRE, JURI and LIBE committees, which regularly discuss AI and its implications, alongside the recently established Special Committee on Artificial Intelligence in a Digital Age (AIDA). With its own-initiative reports, the EP can also formally request that the EC bring forward legislative proposals on topics of shared EU competence.
The European Economic and Social Committee (EESC) is an EU advisory body that unites various economic interest groups of the Internal Market. In its advisory capacity, the EESC offers opinions on the EC’s policy frameworks and puts forward proposals regarding the Digital Single Market and its stakeholders.
The High-Level Expert Group on Artificial Intelligence (AI HLEG) is a team of 52 experts appointed by the EC and tasked with implementing AI-related strategic plans and policy recommendations.
The Council of Europe (CoE) stands out as one of the leading international organisations for human rights. Its Ad Hoc Committee on Artificial Intelligence (CAHAI) is responsible for addressing prominent AI-related concerns and drafting a humane framework regarding the development of AI.
Non-Governmental Organisations (NGOs) and researchers on human rights play a crucial role in monitoring the implementation of AI regulations and policies regarding ethical and social issues, as well as influencing the legislative process through networks such as the EU Agency for Fundamental Rights (FRA) and the Human Rights and Democracy Network (HRDN).
The Human Factor
As captured in the words of EC President Ursula von der Leyen, ‘the algorithm is as smart as the data you feed it,’ meaning that the rapid development of AI does not free it from the implicit biases of its human creators. AI bias mirrors the opinions and prejudices in society, inheriting existing biased practices through poorly selected, unconsciously or intentionally biased training data. With human input being a necessary part of AI research, unbiased algorithms need unbiased data sets that address real-life discrimination.
Algorithm experts suggest that this issue runs even deeper than simple human bias, with groups facing AI discrimination often being significantly underrepresented in certain key environments, such as women in technological research groups, and largely overrepresented in others, such as people of colour in the American prison system. With the quantity and diversity of data playing a large role in the eventual workings of an AI algorithm, these systems end up being adversely influenced by extraneous factors, such as the subject’s race and socio-economic status, thus reinforcing existing social inequalities. This self-reinforcing effect is built into the way AI and ML work, thus posing a continuous challenge to ethical AI usage.
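The mechanism described above can be made concrete with a minimal sketch. The loan-approval scenario, the group labels and all figures below are invented for illustration; the point is simply that a model fit on historical decisions reproduces whatever disparity those decisions contain, with no malicious intent required anywhere in the code.

```python
# A minimal sketch of how skewed training data propagates bias.
# Hypothetical records of past loan decisions: group "B" was
# historically approved far less often than group "A".
from collections import Counter

historical = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +   # group A: 80% approved
    [("B", 1)] * 20 + [("B", 0)] * 80     # group B: 20% approved
)

def fit_approval_rates(records):
    """Estimate P(approved | group) by simple frequency counting."""
    totals, approvals = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        approvals[group] += label
    return {g: approvals[g] / totals[g] for g in totals}

model = fit_approval_rates(historical)
# The fitted "model" simply inherits the historical disparity:
# model["A"] == 0.8, model["B"] == 0.2
```

If the model’s outputs then drive new decisions that are fed back as training data, the disparity is preserved or amplified, which is the self-reinforcing loop the paragraph above describes.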
Transparency and Accountability vs. Trade Secrecy and AI Innovation
Alongside the human factor being critical in the development of AI, the algorithms themselves are also major deciding factors, with some being more bias-prone than others. Thus, the transparency of AI systems and their training datasets is vital in detecting human rights violations. While advocates of AI transparency insist that publishing the source code and training data of critical AI algorithms need not harm companies and may be essential for identifying high-risk issues, multiple AI developers contend that increased transparency requirements may facilitate replication, thus reducing innovation and investment incentives.
Evidently, corporate secrecy laws, along with the pre-existing intricacy of AI algorithms, render systems unaccountable and obstruct the assessment of bias and the correction of errors, and are thus seen as stymying progress. On the other end, opponents of mandatory transparency argue that it requires a decrease in algorithmic complexity, thus sacrificing efficacy and innovation, despite the notable progress made by ethical AI actors. Since AI algorithms rely on highly complex models and data points that evolve over time, even waiving trade secrecy and fully auditing these systems might not yield a complete solution.
Self-Regulation vs. Government Intervention
At the same time, the internal opaqueness of AI is amplified by a lack of credible external evaluation, whereby consumers and users of AI systems are often unable to understand beforehand when and to what extent an algorithm’s decision is biased, despite ongoing efforts to counter this opacity. Ultimately, the shifting balance between transparency and businesses’ competitive advantages remains a prominent challenge in today’s AI technology, creating highly branched and complex issues.
With competitive market forces in AI primarily driving the search for new applications and markets rather than incentivising non-discrimination, experts are increasingly concerned that self-regulation may not be enough to tackle the ethical challenges posed by the development of AI and to ensure accountability. Thus, various NGOs have called for granting increased regulatory and supervisory powers to government agencies, with the World Economic Forum issuing guidelines for such audits, especially relating to facial recognition AI. Even then, the challenge would shift to creating national or EU-wide AI safety guidelines and certification models that meet nuanced sectoral expertise requirements while enabling innovation, a task that has historically troubled lawmakers.
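One concrete form such an external audit could take is computing a simple disparity metric over a system’s decisions. The sketch below uses one widely discussed measure, the demographic parity gap (the difference in positive-outcome rates between groups); the group labels and sample decisions are invented, and real audit schemes would involve many more metrics and legal thresholds.

```python
# A minimal sketch of one metric an AI audit could report:
# the demographic parity gap, i.e. the largest difference in
# positive-outcome rates between demographic groups.
def demographic_parity_gap(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the max difference in positive-outcome rate across groups."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative audit sample: group A receives positive outcomes
# at rate 2/3, group B at rate 1/3, so the gap is 1/3.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(sample)
```

A regulator could require such metrics to stay below an agreed threshold, which is one way supervisory agencies might operationalise the audits discussed above without needing full access to a system’s source code.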
Policy Options ahead
The White Paper on Artificial Intelligence by the EC offers policy options for a future EU regulatory framework for relevant AI actors, with a particular focus on high-risk applications.
Following the EC report on human-centric AI, the EESC proposed an AI trustworthiness certification scheme, to be issued and supervised by an independent EU agency. The CoE has also recently recommended a certification mechanism for AI tools used in the justice and judiciary system.
More than 14 Member States have indicated a strong preference for self-regulation over binding rules for most AI systems, with the exception of technologies considered ‘high-risk,’ such as facial recognition and healthcare applications.
Finally, the EC’s EU Mutual Learning Programme in AI Gender Equality, alongside the EESC, recommended human resource training and knowledge-sharing to understand the extent of AI usage and develop best practices against discrimination.
Food for Thought
While the European Union strives to become a global leader in innovation, it needs an inseparable human-centric approach to preserve its own values within the ever-growing AI race. With AI bias damaging consumers’ trust and undermining the fundamental right to non-discrimination, the EU needs to further develop its strategies on the ethical aspects of Artificial Intelligence. In examining the various dimensions of these strategies, you can consider the following Key Questions:
Taking into account the ‘black-box metaphor’ for the intricacy of AI systems, how can the EU accurately evaluate the effectiveness of AI algorithms for minority users and measure an increase or decrease in AI bias?
Considering the complexity of AI algorithms, can the EU create regulatory frameworks for more diverse and representative AI training data sets and, if so, should it?
How can the EU balance transparency frameworks for AI algorithms with freedom for innovation and investment?
In light of the current discriminatory nature of various AI systems, should the EU consider limiting their contribution to important decision-making processes, such as health policy, law enforcement, and the labour market?
How can the EU promote collaboration across stakeholders on technological development as well as human rights, connecting equality monitoring bodies with actors designing and utilising AI?
‘AI rules: what the European Parliament wants’, Press Release (text) by the European Parliament (2020). Link. An exposition of recent EU legislation on AI with a focus on future directions and links to related fields.
‘Parliament leads the way on first set of EU rules for Artificial Intelligence’, Press Release (text) by the JURI committee of the European Parliament (2020). Link and Full Text. An exposition of the legislative initiative of the European Parliament requesting a new legal framework outlining ethical principles and legal obligations for AI.
‘EU struggles to go from talk to action on artificial intelligence’, Opinion piece (text) by Science|Business (2020). Link. A critical perspective on the EU’s policy on AI discussing the challenge of balancing human rights and innovation, as well as future directions for the EU.
‘How I am fighting bias in algorithms’, TEDx Talk (video) by Joy Buolamwini (2018). Link. A talk by an expert in Computer Science and AI discussing her experience with AI bias and the fight against it.