YSEC Volume IV (2023)

Law and the Governance of Artificial Intelligence

Contents

Artificial intelligence (AI) has the potential to radically transform our society. It may lead to a massive increase in the capabilities of humankind and allow us to address some of our most intractable challenges. It may also entail profound disruption to structures and processes that have sustained our society over centuries.

These developments present a unique challenge to the socio-economic constitutional arrangements which govern our world at national, regional and international level. The deployment of increasingly powerful AI systems, able to function with an increasing degree of autonomy, has led to concerns over the loss of human control of important societal processes, over the disruption of existing economic, social and legal relationships, and over the empowerment of some societal actors at the expense of others, together with the entrenchment of situations of domination or discrimination. It has also made increasingly clear how tremendous the potential benefits of these technologies are for those who successfully develop and deploy them.

There is therefore great pressure on governments, international institutions, public authorities, civil society organisations, industry bodies and individual firms to introduce or adapt mechanisms and structures that will avoid the potentially negative outcomes of AI and achieve the positive ones. These mechanisms and structures, which have been given the umbrella term ‘AI governance’, cover a wide range of approaches, from individual firms introducing ethical principles that they voluntarily abide by, to the European Union’s proposed AI Act, which would prohibit certain types of AI applications and impose binding obligations on AI developers and users. The fast pace of innovation in the development of AI technologies is mirrored by the fast pace of development of the emerging field of AI governance, where traditional legislation by public bodies is complemented by more innovative approaches, such as hybrid and adaptive governance, ethical alignment, governance by design and the creation of regulatory sandboxes.

There is an urgent need to understand the implications of these developments for our Socio-Economic Constitutions, and to that end, YSEC invites scholars to contribute original submissions on legal aspects of AI governance.

Chapters

The 2023 Yearbook of Socio-Economic Constitutions, dedicated to ‘Law and the Governance of Artificial Intelligence’, explores the timely and complex issues surrounding the emergence of artificial intelligence (AI) technologies. The volume delves into the transformative potential of AI in various sectors such as law enforcement, healthcare, and recruitment, while also highlighting the growing global concern regarding the risks posed by AI. The contributors examine the evolving landscape of AI governance, discussing the challenges of regulating AI, the role of law as a governance tool, and the disruptive effects of AI on existing governance regimes. The volume scrutinizes the proposed EU AI Act and its implications, as well as the need to reconceptualize fundamental legal principles in response to AI advancements. Furthermore, it addresses the concentration of power among private corporations in AI development and the associated risks to individual rights and the rule of law. Ultimately, the volume underscores the crucial role of law in navigating the complexities of AI governance and calls for a nuanced understanding of the implications of AI for society.

Eduardo Gill-Pedro and Andreas Moberg (2024)

Read the full chapter on Springer.

The European Union (EU) has been actively addressing the regulation of emerging technologies like artificial intelligence (AI), recognizing their potential to tackle societal challenges related to climate change, healthcare, education, and transportation. Over the past few years, there have been numerous proposals for laws dealing with the challenges posed by AI and new digital technologies.

The focus now is on ensuring that these proposals are not considered in isolation, but rather that the various pieces of legislation align with and complement each other. This is crucial because AI systems may fall under multiple legal regimes, such as the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Ensuring consistency in terminology and rules is vital to prevent legal contradictions and uncertainties.

This analysis aims to examine the existing and proposed AI regulations and to identify how they interact with each other. The goal is to prevent legal uncertainty, fragmentation, and regulatory gaps, in line with the EU’s ‘Better Regulation’ agenda, which seeks to ensure high-quality, accessible, and transparent legislation.

The EU's ambition is to establish an 'ecosystem of excellence' and an 'ecosystem of trust' for AI, addressing not only technological aspects but also the quality of governing laws. Effective governance is essential to mitigate risks related to fundamental rights, including data protection, privacy, and non-discrimination. Safety and liability regimes also need to function effectively. Many AI systems are opaque, leading to information imbalances between developers and other stakeholders.

Ultimately, the EU aims to implement trustworthy AI and to prevent legal fragmentation across member states, so as to provide legal certainty for developers, public authorities, companies, consumers, and all other stakeholders. The contribution delves into the various legislative frameworks, highlighting their challenges, the significance of the AI Act, and the importance of these frameworks working cohesively, so that emerging technologies can be regulated effectively and their benefits realised while rights and safety are safeguarded.

Béatrice Schütte (2023)

Read the full chapter on Springer.

Liane Colonna (2023)

Read the full chapter on Springer.

This chapter analyses the dynamics of AI governance and advances a few ideas that should help us ensure an AI compliant with human dignity and freedom, human rights, democracy, and the rule of law. Human beings are shedding their ability to govern and exercise regulatory authority over their own affairs, ceding power to nonhumans; as much as this trend may be owed to the advance of AI technology, the regulatory activity of the companies that develop these technologies is also complicit in it. On top of that, there is the question of AI in the hands of governments, whose use of the technology for their own (legal) regulation gives them a power over citizens that can be misused or abused. How, then, to confront the danger that AI and its use are posing? This chapter argues that one tool we can use to protect ourselves is that of human rights, but to that end, if we are to make these rights effective, we have to bring them up to date in light of the advancements made in AI. Due to the very nature of AI, the focus is on privacy and data protection, where I identify three cases, namely group privacy, biometric psychography, and neurorights, and with each of these cases I show that AI can be governed either by introducing a suite of rights designed to be AI-responsive or by reinterpreting existing rights so as to make them so.

Migle Laukyte (2023)

Read the full chapter on Springer.

The indirect horizontal effect of human and fundamental rights has dominated European constitutional practice. In recent years, however, fractures have appeared in this orthodoxy, both in the courts and in regulatory practice. The EU has introduced multiple legislative initiatives that push the rights towards apparent direct horizontal effect.

This article analyses the AI Act and the Corporate Sustainability Due Diligence Directive as examples of a novel human and fundamental rights strategy. The article argues that the instruments first weaken the rights and then deploy them to normatively guide and condition intra-firm sense-plan-act cycles. The rights are first recast as adverse human rights impacts and fundamental rights risks that serve as objects of concern in corporate information processing; the planning and acting stages then translate the rights into real-world reductions in human and fundamental rights violations. While weak on its face, the novel strategy is likely an adaptation to political pressures, but it contains the seeds of a possible progressive endgame.

Mika Viljanen (2023)

Read the full chapter on Springer.

The rule of law is an elusive concept, and its fluidity lends itself to multiple interpretations. Different accounts connect various core elements, or ‘desiderata’, under this universally recognized concept. However, there is an (albeit implicit) consensus that the rule of law is essentially a public law concept, of only marginal concern to private law. This paper departs from that understanding and suggests that the presumption is a misperception. The rule of law does not concern only the regulation of powers and arbitrariness between individuals and the State; it also operates in the relationships between private individuals.

In particular, with the surge in recent years in the use of machine learning algorithms to profile online users (in order to predict their behaviour and tailor recommendations and searches to their preferences), private actors (i.e. online platforms) have obtained a super-dominant position, both in the collection of data and in the development of the technology, within the digital (eco)systems in which they operate.

This paper aims to prospectively assess the dual relevance that algorithmic profiling has for the protection of fundamental rights from a private law perspective (e.g. the right to privacy, the right not to be discriminated against, freedom of expression) and for the self-appointed power of online platforms to self-regulate their contractual relationships with users in digital markets. Conversely, it also discusses the relevance of the rule of law for private law relationships in its function as a stronghold for the protection of fundamental rights. On one side, this value creates legal guardrails around the private self-regulation of online platforms; on the other, it secures respect for users’ fundamental rights against algorithmic profiling by those platforms. The paper concludes that the State’s power to limit private freedom and interfere in parties’ autonomy needs to be re-evaluated in cases where fundamental rights are seriously at stake.

Silvia Carretta (2023)

Read the full chapter on Springer.

The future of artificial intelligence (AI) promises significant changes to human life, from infancy to old age. AI systems, capable of managing computer networks, operating IoT devices, designing virtual realities, and learning autonomously, are set to redefine human interaction with technology. At an individual level, the integration of AI may involve brain-computer interfaces. However, ensuring that AI integration benefits humanity is essential, particularly when AI is used in critical decision-making systems.

AI presents both incredible potential and risks. It challenges the very notion of a "thing", as AI systems can be used alongside humans or may even surpass human capabilities. A "social contract" between individuals and the state must ensure that enforceable rights exist when machines make decisions, so as to uphold the rule of law, democracy, and fundamental rights.

The distinction between AI as a phenomenon and its field of application is vital. The risks associated with AI depend on the purpose of data processing. General-purpose AI software, like ChatGPT-4, can be a valuable tool as a talking encyclopedia but poses risks in critical decision-making contexts.

To address these challenges, the European Union (EU) has introduced the AI Act, focusing on AI systems posing high risks to safety and fundamental values. It requires AI systems to be designed for human oversight during use. This legislation is part of the broader EU framework for market surveillance, conformity assessment, and data protection.

The development and deployment of AI systems must respect fundamental rights and comply with safety and interoperability standards. AI's impact on data privacy and protection, as governed by the General Data Protection Regulation (GDPR), is a significant concern; the right not to be subject to a decision based solely on automated processing, stipulated in Article 22 of the GDPR, is particularly relevant. AI's effectiveness should be measured in terms of human well-being rather than purely economic productivity.

Balancing AI's productivity against human-centric development requires a long-term perspective, in which limiting economic freedom can be a necessary safeguard. Ensuring AI's compliance with the rule of law is essential if it is truly to benefit humanity.

Claes Granmar (2023)

Read the full chapter on Springer.

Algorithms are becoming increasingly prevalent in the hiring process, as they are used to source, screen, interview, and select job applicants. This chapter examines the perspectives of both organisations and policymakers on algorithmic hiring systems, drawing examples from Japan and the United States. The focus is on the drivers underlying the rising demand for algorithmic hiring systems and four risks associated with their implementation: the privacy of job candidate data; the privacy of current and former employees' workplace data; the potential for algorithmic hiring bias; and concerns surrounding ongoing oversight of algorithmically assisted decision-making throughout the hiring process. These risks serve as the foundation for a risk management framework based on management control principles, intended to facilitate dialogue within organisations about the governance and management of such risks. The framework also identifies areas on which policymakers can focus to balance (i) granting organisations unfettered access to the personal and potentially sensitive data of job applicants and employees to develop hiring algorithms against (ii) implementing strict data protection laws that safeguard individuals' rights yet may impede innovation, and it emphasises the need to establish an intra-governmental AI oversight and coordination function that tracks, analyses, and reports on adverse algorithmic incidents. The chapter concludes by highlighting seven recommendations to mitigate the risks organisations and policymakers face regarding the development, use, and oversight of algorithmic hiring.

Jason D. Schloetzer and Kyoko Yoshinaga (2023)

Read the full chapter on Springer.

This paper engages with a key debate surrounding artificial intelligence in health and medicine, with an emphasis on women's healthcare. In particular, the paper seeks to capture the lack of gender parity where women's health is concerned, a consequence of systemic biases and discrimination in both historical and contemporary medical and health data. A review of the existing literature demonstrates that there is not only a gender data gap in AI technologies and data science fields, but also a gender data gap in women's healthcare that results in algorithmic gender bias, negatively affecting women's healthcare experiences, treatment protocols and, finally, rights in health. On this basis, the article offers a concise exploration of the gender-related aspects of medicine and healthcare, shedding light on the biases encountered by women in the context of AI-driven healthcare. Subsequently, it conducts a doctrinal comparative law examination of the existing legislative landscape to scrutinize whether current supranational AI regulations or legal frameworks explicitly encompass the protection of fundamental rights for female patients in the realm of health AI. The scope of this analysis encompasses the legal framework governing AI-driven technologies within the European Union (EU), the Council of Europe (CoE), and, to a limited extent, the United Kingdom (UK). Lastly, the paper explores the potential utility of data feminism (which draws on intersectionality theory) as an additional tool for advancing gender equity in healthcare.

Pin Lean Lau (2023)

Read the full chapter on Springer.

The Covid-19 pandemic has affected the entire area of health care, including the care provided to patients with mental health problems. Due to the stressful nature of the pandemic, the number of patients experiencing mental health problems, especially depression or anxiety, has increased. Even well before the pandemic, Europe struggled with a shortage of mental health care, reflected above all in long waiting times. The problem appears to have been addressed by the plethora of mental health applications freely available on the market. Given how accessible these applications are to users, I decided to scrutinise the safety of using AI in these health apps, with a particular focus on chatbots. I examined whether existing European legislation can protect users from possible harm to their health and whether it requires these mental health applications to be certified as medical devices.

After analysing the Product Liability Directive and the upcoming legislation on liability associated with AI, I must conclude that there is insufficient transparency and protection for users of these applications. Based on experience from the user's perspective, I identified three shortcomings: (i) no possibility of scheduling an appointment with a healthcare professional, (ii) a lack of human oversight, and (iii) a lack of transparency as regards the type of AI used. Due to the 'black box' problem, a user who has been harmed will likely be unable to obtain compensation, given the difficulty of proving causality between the defect and the damage.

Petra Müllerová (2023)

Read the full chapter on Springer.

The use of artificial intelligence by public administrations raises major legal challenges, given its potential to harm citizens' rights and freedoms. The risks that artificial intelligence systems may entail must therefore be assessed when AI systems are procured, both in the design of the tender and in the establishment of the contractual obligations.

In this context, even in the absence of a European legal framework on the use of artificial intelligence systems, a number of soft law documents have already been drafted to ensure that administrations procure trustworthy AI systems. From these, it is possible to extract a series of guidelines that should be incorporated into the public procurement of AI systems, as the corresponding clauses derive without difficulty from the general legislation applicable to the administration's use of new technologies.

Isabel Gallego Córcoles (2023)

Read the full chapter on Springer.

In constitutional theory, the requirement of necessity is an integral part of a wider proportionality assessment in the limitation of constitutional rights. It fulfils the function of sorting out measures that restrict rights more than is required to fulfil the intended purpose. Within data protection, the requirement varies in strictness and interpretation, from 'ordinary' necessity to 'strict necessity'. Recently, the European Court of Justice (ECJ) has introduced what appears to be an even stricter requirement of 'absolute necessity' relating to the processing of biometric information under the EU Law Enforcement Directive (LED). In practice, however, the implications of these respective levels of strictness tend to vary, from a strict 'least restrictive means' test to an analysis of whether a measure is necessary for a more effective or more efficient fulfilment of the intended purpose. In this contribution, the principle of necessity as applied by the ECJ is analysed as it pertains to the LED and the Charter, more specifically in the context of implementing AI-supported analysis of biometric data. The gradual development of the interpretation of necessity is traced through the data protection case law of the ECJ. The study shows the increased emphasis placed on proportionality over time, highlighting both strengths and potential weaknesses of the requirement in relation to the use of AI-supported decision-making in the law enforcement context.

Markus Naarttijärvi (2023)

Read the full chapter on Springer.