Report

Apr 29, 2020

Towards European Anticipatory Governance for Artificial Intelligence

This report presents the findings of the Interdisciplinary Research Group “Responsibility: Machine Learning and Artificial Intelligence” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Technology and Global Affairs research area of DGAP. In September 2019, they brought leading experts from research and academia together with policy makers and representatives of standardization authorities and technology organizations to set framework conditions for a European anticipatory governance regime for artificial intelligence (AI). 

ABOUT THE WORKSHOP

The workshop “Towards European Anticipatory Governance for Artificial Intelligence” aimed to set framework conditions for a European anticipatory governance regime for artificial intelligence (AI) by exploring which regulatory instruments could deliver beneficial AI for society, as well as when and in which stakeholder constellations they could be implemented in order to safeguard fundamental rights, boost responsible behavior, and prevent malicious use.

Based on the fact that technology interacts with society in many ways – desirable and undesirable, predictable and unforeseen – the workshop sought to negotiate both the opportunities and limits of AI’s application within a societal, interdisciplinary setting, thereby ensuring that the debate was not distracted by alarmist or euphoric narratives, but grounded in evidence. Its ambition was to demystify the mainstream AI discourse, recast the AI challenge beyond the dominant narratives, and point to a number of overlooked policy options that would reinforce and consolidate Europe’s capacity to act in the future, particularly against the backdrop of geopolitical shifts currently being triggered by AI-based technologies.

12 KEY PROPOSALS

These 12 proposals can promote AI development in Europe and help both industry and citizens to reap its benefits.

1. Recast the challenge by building a policy framework for AI innovation

If Europe is to unlock the value of AI for its societies, we need to depart from a narrative that mystifies AI as the major disruption yet to come. Technology is not a natural phenomenon that imposes structural constraints on decision-making. Rather, it is the other way around: technology is developed and used by human beings and thus provides room for action. Policies, laws, and regulation often seem to lag behind innovation because technologies emerge and advance in labs and start-ups, not in bureaucracies. However, emerging AI technologies and their multiple applications are always developed and implemented within a political, organizational, and cultural context and are invariably shaped by it. The fact that AI-based technologies are embedded in societies offers a chance for early intervention in AI value chains with regulatory sticks and carrots.

2. Defend the European way of life instead of following US or Chinese paths

The “European way of life” – democracy, freedom, rule of law – is not necessarily a unique selling point for Europe in the rest of the world. That said, European legal interventions such as the General Data Protection Regulation (GDPR) are, due to public demand, increasingly influencing regulatory approaches in other parts of the world, including the US. Against this backdrop, a strong European brand of AI needs to be based on high quality standards, compliance with existing legal provisions on fundamental rights and non-discrimination, and, not least, on excellent pioneering research. It is the combination of the above that could make a positive difference for an EU label for developing and using AI, one that goes beyond being an uninspired copy of US or Chinese methods. While the EU ought to start out modestly given its current position in the overheated global AI game, it should become bolder and more assertive in its ambition to create appropriate technologies for the European way of life and beyond.

3. Unlock the potential of ethics assessments and the EU’s Responsible Research and Innovation model

Public debate is already saturated with calls for “ethical AI.” Yet any claim to ethics is currently rather abstract and would need to be operationalized to really mean something. In this regard, focus should be placed not only on algorithms, but also on the data upon which AI-based technology is developed and the sociopolitical context in which it is applied. This process is only beginning. Moreover, existing ethical guidelines have, so far, been presented or largely influenced by the industry and business sector without sufficient inclusion of experts in (applied) ethics and voices from civil society and the research sector. Broad stakeholder engagement is one of the core prerequisites of Responsible Research and Innovation (RRI). Practicing responsible research, multi-stakeholder engagement, mutual responsiveness, and reciprocal commitment is key to enabling the delivery of inclusive, accountable, and acceptable innovation that is beneficial to many. Only if those conditions materialize in AI developed in Europe may we speak of an ethical and human-centric European brand of AI.

4. Foster trust in institutions and define responsibilities

Who is responsible – and thus liable – for developing AI and AI-based products that are ethical? AI leads to a particular diffusion of responsibility among human and non-human agents, as well as along processes, which makes it increasingly difficult to attribute moral and legal responsibility to certain private or public actors.

Another key question: Can a technology per se be trustworthy or not? The current discussion of this issue obscures the fact that the trustworthiness of technology needs to be defined by technical norms, standards, and certificates (see point five below), which delineate a zone of acceptable performance. First and foremost, citizens place their trust in public institutions, such as authorities and governments, which can guarantee their societal welfare and the security of AI-based technologies. Secondly, they place their trust in businesses that provide innovative products and services to the market. Both public institutions and businesses are, however, composed of people, making them inherently fallible.

5. Streamline the adoption of technical norms, standards, and certification

As ongoing efforts to create norms for autonomous vehicles increasingly show, standardization can be an effective “soft” regulatory tool, accompanying the development of emerging technologies whose potential uses legislation cannot yet fully grasp. Currently, there are dedicated efforts at the national, European, and international levels to define common terminologies in AI; examine technical specifications in all subdomains of AI-related technologies; assess areas of risk and acceptance; and integrate legal, societal, and ethical aspects into standards. A major advantage of the standard-setting process is that it is driven and controlled by the research, development, and innovation (RDI) community and therefore has feedback loops that make it adaptive to changes in technology. There is increasing support for the introduction of AI quality certification for products entering the markets as a guarantee of safety.

6. Integrate foresight, technology assessment, and democratic oversight into policy-making

As technological developments associated with AI have different degrees of maturity, and applications in the market are rapidly evolving, the impacts of their present and future uses are not fully clear. This calls for strengthening forward-looking analyses, including those of institutional, organizational, and cultural/value issues. In the context of RRI and technology assessment, efforts at the parliamentary and international levels to anticipate new and converging technologies and their potential disruptive effects – both desirable and undesirable – should help to address societal and geopolitical aspects. Such activities need to inform political decision-making and democratic oversight in a systematic manner.

7. Strike a conscious balance between innovation and precautionary principles

There may often appear to be an irreconcilable tension between innovation and precautionary principles. Innovation does not, however, have to be restricted by unnecessary bans. The precautionary principle – enshrined in EU treaties since 2005 – prescribes proactive caution when it comes to risks to the consumer/citizen that cannot be prevented or mitigated with available solutions. Precaution is not about bans, but rather about establishing “traffic rules,” and even imposing moratoriums if more time is needed to cope with the risks. This approach is particularly important when it comes to dual-use technologies with civil and military applications that raise serious concerns about accidental or intentional misuse by malevolent parties.

8. Boost capacity to act strategically at the national and European level

Action at the EU level is often too slow and too cautious, which can – more often than not – be attributed to the reluctance of member states to proceed jointly and decisively in one direction. The stakes involved in AI research, development, and innovation processes are high and include the welfare and protection of individual citizens, industrial competitiveness, the protection of critical infrastructure and national security, and the European capacity to act in an interconnected world. Critical mass and scaling potential can only be achieved jointly at a European level. Adopting a capability-driven approach to AI could facilitate the transformation of novelties into genuine and sustainable innovations. It will be necessary to mobilize relevant EU industries to exploit synergies, avoid unnecessary duplication, and scale up European efforts. Furthermore, in the manner of RRI and Open Science, a comprehensive EU governance approach ought to establish a permanent dialogue platform that engages all stakeholders throughout the AI value chain. Doing so could end the current disconnect between AI actors at the development end and stakeholders at the user end, as well as among regulators and authorities.

9. Disrupt AI by rethinking desirable and undesirable consequences from a policy viewpoint

AI-based technologies have triggered several debates in both expert and lay circles about multiple upcoming “disruptions.” Only careful monitoring can reveal which of them will be mere hype and which will bring real, expected benefits, not to mention their costs and unintended effects. It is the task of policy makers to make informed decisions about desirable objectives and intervene with laws, standard-setting, or other means to achieve them. Impacts related to ecology, welfare, fundamental rights, and socio-economic equality are to be considered, in addition to arguments about technological competitiveness and economic profit. When it goes unharnessed, disruption through technology may lead to political turbulence and societal unrest.

10. Regulate AI-based technologies in a smart and sustainable way

A European brand of AI aims to sustain and enhance the “European way of life” through AI-based technologies, as well as by working toward welfare, fairness, and societal resilience. Firstly, we need to look at where there is already regulation that applies to AI-based technologies and, secondly, decide what kind of new regulation makes sense. At the same time, other highly salient RDI domains, such as the medical and pharmaceutical sectors, can teach us how legal frameworks for self-regulation could ultimately serve to enforce codes of conduct. In order to stay competitive, it can be viable not to regulate technology per se, but to define how we want AI-based technology to be used and what kinds of applications we will not tolerate. Technology-neutral regulation makes it possible to contextualize further developments in established social use.

11. Invest in enablers of AI innovation, such as digital literacy

Governance measures should address and boost AI uptake and diffusion in businesses, public authorities, and research, while simultaneously enabling representatives of these sectors and citizens in general to take informed decisions and action. If citizens are not given training to improve their skills beyond a basic understanding of the logic of algorithms and the role of data, no diffusion of innovative technological solutions will take place – and no critical oversight either. At the same time, we need philosophers, ethicists, and social scientists to be trained in the specifics of AI in order to realize the potential of a European brand of AI.

12. Promote European champions

This will demand joining forces at the EU level instead of pursuing separate national strategies. Moreover, in the new Multiannual Financial Framework, European governments need to “put the money where their mouths are,” following the prominent role given to digital technologies by the European Commission. Instead of following a backward-looking distribution model, the R&D, digitalization, and competition dossiers need to be strengthened in view of the challenges facing Europe in the wider shifting geopolitical context. This implies that relative, not absolute, performance is what counts regarding the dynamics in China, India, the US, Russia, and elsewhere. Close European coordination on policies on competition, innovation, trade, and fundamental rights is key to delivering an effective, coherent, and sustainable instrument for mobilizing the untapped potential of AI for Europe. A crucial enabler for scaling up B2B or B2C AI-supported solutions is infrastructure that allows connectivity and interoperability. Innovation should be based on purpose- and rule-driven data sharing, while safeguarding fundamental rights as enshrined in the European Charter of Fundamental Rights.

 

DIMENSIONS OF AI GOVERNANCE

by Dr. Isabella Hermann & Georgios Kolliarakis

The development of AI-based technology, its possible uses, and its disruptive potential for society, business, and policy are intimately interconnected. In this respect, governance of AI systems, as in most cases of emerging technologies, resembles a moving target. This poses a threefold policy challenge: first, in terms of the high levels of uncertainty in assessing future AI applications as beneficial or malevolent; second, in terms of value and interest conflicts among involved actors, but also societies and states; and third, in terms of a high degree of complexity due to the involvement of several policy fields beyond technology R&D and industrial policies, such as those related to consumer protection, competition, labor, defense, and foreign affairs. A whole array of policy instruments is available, including those that are self-regulatory, such as codes of conduct (CoCs); those that are “soft,” such as RDI investment, standardization, and certification; and those that are “hard” and binding, such as legislation and international agreements.

Background

Hopes and concerns for the application of technologies using artificial intelligence (AI) have been fueling public debate for quite some time. The major lines of discussion run between optimistic innovation narratives about AI’s benefits for society and precautions to prevent potential negative effects, both unintended and anticipated. These could include risks for fundamental rights and security, lack of provisions for responsibility and accountability, lack of law enforcement, and regulations unfit to handle the accelerating pace of research and development (R&D) as well as AI’s multiple “dual-use” applications.

As in most cases of emerging technologies, certainty and consensus on how to reach desirable goals with AI are low, whereas the complexity of governance and overall stakes around it are high. The European Anticipatory Governance we propose consists of three dimensions: European, Anticipatory, and Governance. Firstly, we use the term governance because it points to the array of instruments and multitude of players involved in shaping the nexus of policy, society, and technology. Concretely, to address governance, discussions must go beyond a spectrum defined by the continuum of laws on the one hand and industrial self-constraints in the form of ethical checks on the other; they must be broadened to include additional tools that have already been established to shape the present and future of technology in, with, and for societies. Secondly, anticipation points to the fact that there is no single, deterministic future lying ahead of us, but – depending on our choices – many contingent ones. It is, therefore, necessary to assess possible, probable, and desirable effects of technology in society. In this way, we can create awareness and, ideally, shared visions in order to mobilize resources and elaborate paths to a beneficial future for society. Hence, thirdly, we should use the power of the European Union and other European countries to create a strategic, material, and moral advantage at an international level based on European values. In doing so, Europe can harvest the benefits and avoid the risks of AI applications for its people. Providing technological alternatives to deliver on the EU’s promise of a common good is, thus, not merely a task for technology and industrial policy, but also for civil society as a whole.

Our ambition was to showcase that, while technology creates constraints for society and policy, the opposite is also true: societal choices and policies can create constraining and/or enabling conditions for technologies in order to reach the goals that humans set. Based on this premise, we asked two main questions: What is our vision for a specifically European research, innovation, and application of AI-based technology? And what mix of legislation, standardization, certification, and self-regulatory approaches is needed to best allow AI to deliver on societal benefits while preventing undesirable side effects? We tackled these questions along four policy dimensions:

  • Codes of conduct, ethics, and moratoriums in R&D, which are all components of self-regulating, non-binding constraints
  • Research and innovation policies, norms, standardization, and certification
  • National, European, and international legislation, treaties, and agreements
  • Europe’s capacity to act in AI innovation policies, focusing upon barriers and windows of opportunity

Codes of conduct and ethics

Although ethics has already been identified as an indispensable component of AI R&D and application, many questions still remain with regard to the “translation” of ethical norms into algorithms. RRI could be an effective and sustainable model for building transparency, accountability, inclusiveness, and precaution into research and innovation. The role of self-imposed CoCs should be underlined and strengthened as laid out in the Ethics Guidelines for Trustworthy AI, published by the European Commission’s High-Level Expert Group on AI (AI HLEG) in April 2019.

EU treaties and charters should guide a value- and human-centric approach to AI

If there are inherent European ethics based on European heritage, the European treaties, and the European Charter of Fundamental Rights, then they should also provide the framework conditions for new technologies based on AI. A European approach to AI needs to be human-centered, meaning that it ensures good collaboration between humans and machines. Its focus must be on what is best for the human being, not on what constitutes the most efficient and optimized process. Consequently, when we discuss the regulation of AI applications, we must also consider drawing red lines. This should not be perceived as constraining innovation, but rather as enabling it to deliver its intended benefits and preventing it from causing harm to humans. Cases in point could be, for example, AI deployment on the battlefield or specific applications such as facial recognition or emotion recognition. Moratoriums in these fields are also conceivable until we have proper mechanisms to cope with the risks – as decades of experience in medical, genetic, and pharmaceutical research have shown.

Merge the ethics discussion with RRI

As a starting point, the ethics discussion on AI should be merged with other normative approaches to technological development. One such approach is the EU’s Responsible Research and Innovation (RRI) initiative “that anticipates and assesses potential implications and societal expectations with regard to research and innovation, with the aim to foster the design of inclusive and sustainable research and innovation.” Based on the methodology of technology assessment (TA), RRI is implemented under the EU Research and Innovation Program “Horizon 2020” as a cross-cutting priority for all research actions over its seven-year span (2014 to 2020). The point of departure of RRI is that, even though there are general principles that technological development must follow – human rights, for example – these are interpreted differently in certain situations depending on the social context. Technology is always context-dependent because socio-technical chains and social contexts vary. RRI focuses on social challenges, the involvement of stakeholders, risk avoidance through anticipation, inclusiveness, and responsibility.

RRI addresses many of the challenges in the current discussion around AI, namely the disconnect between the AI ethics discussion and the actual development and application of AI-based technology. The EU’s Ethics Guidelines for Trustworthy AI try to overcome this discrepancy, but much could still be gained from RRI, especially as there are already tangible benchmarks and lessons learned, and the approach is well understood and used outside of Europe. Currently, RRI is being integrated into the even broader term Open Science. An RRI/Open Science approach to AI development could offer a practical way to work together internationally. In addition, it could contribute to the demystification of AI by helping it to be seen as what it is: a technology enabler acting as glue between the two socio-technical trends of big data and digitalization. In this respect, an anticipatory approach – instead of retrospective interventions – is key.

Formulate and consolidate codes of conduct

Nevertheless, if we are to implement CoCs in the spirit of RRI and Open Science, the question arises as to how binding they can be. Let us take a step back here and consider two crucial aspects of the discussion. On the one hand, there is an imbalance in the industry’s involvement in the current drafting of numerous ethics guidelines; on the other hand, AI-based technologies lead to an atomization of human responsibility.

Firstly, industrial actors play a prominent role in defining ethical principles for the development and application of AI – be it internally, in cooperation with other companies, in political committees, or even in the academic environment. Microsoft and Facebook can be named as prominent examples. Industry and business have also played a role in the Ethics Guidelines for Trustworthy AI themselves, since industry actors are well represented in the AI HLEG. Generally, industry counts on self-regulation, since the more binding certain rules are, the more they are regarded as inhibiting innovation and business opportunities. It is noteworthy that, in the process of defining ethical guidelines and rules, the experts in this field – philosophers and ethicists – seem to be underrepresented. If we are really convinced that Europe has an intellectual leadership role in the definition and implementation of ethical AI, we also need to train a new generation of experts in the field of (applied) ethics. Importantly, the discussion about the trustworthiness of technology conceals one important element: citizens build trust toward the public institutions and businesses, made up of people, that guarantee the societal welfare and security of AI-based technologies – not toward the technology per se.

Secondly, AI leads to a diffusion of responsibility among human and non-human agents and along processes, which makes it increasingly difficult to attribute moral and legal responsibility to certain (personal) actors. One of the basic questions to address is who is responsible – and thus liable – for ensuring that AI development and products are ethical. Is it the company developing the application, the coder and data scientist, the business or state agency offering it as a service, or the person using it for assistance? Even though the problem of distributed responsibilities in technological development has always existed, we are now confronted with a new severity to the issue. Technology using AI is not only applied in single situations; it also creates a whole techno-social system that will affect our societies in fundamental ways.

Norms, Standardization, and Certification in Research and Innovation

R&D spending, fostering innovation ecosystems, standardization, and certification of AI are “soft” yet important governance instruments. Being competitive in the global AI field while adhering to European fundamental rights and values is one challenge here. Another challenge is the interplay – and often unnecessary fragmentation – between policies at the national level and the Europeanization of efforts.

Strategically structure AI funding in R&D

Europe has great researchers at the university level and excellence centers in the AI field. To better leverage the opportunity they present, however, these research institutions and innovation hubs should be brought together in the European context to generate and leverage synergies in order to create competitive critical mass. This is especially true because the US technology companies Google, Amazon, Facebook, Apple, and Microsoft – known collectively as “GAFAM” – are transforming themselves more and more into “AI companies.” To be competitive, the EU must understand that investment in physical infrastructure, such as highways, needs to be matched in the medium term by investment in intangible assets, such as R&D, training, skills, and education.

In order to achieve more European competitiveness and the ability to innovate, it is not useful for Europe to “fight already lost battles.” It makes little sense to compete in fields or develop tools and platforms where it is almost impossible to gain ground vis-à-vis the United States and China – for example, by creating a European search engine. Instead, the European Union and its member states should find a niche in the AI industry with the aim of creating “the next big thing” with market potential and in line with European values. In that context, Europe should build on the strengths of its industrial base and combine these advantages with AI technologies. Even though there seems to be a perception that only large countries and organizations can attain innovation breakthroughs in AI, the field also holds a great deal of potential for small players.

Move toward international norms and technical standards for AI

Norms and standards are bound to play a key role in the process of creating governance for AI. At the national and international levels, a certain division of labor currently exists: political decision-makers define the specific requirements for AI, while standardization bodies define concrete technical standards. In order to fully develop AI standardization, common concepts, operationalization, and vocabulary are needed internationally. The Joint Technical Committee (JTC 1) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), a consensus-based and voluntary international standards group, is currently developing an international standard on AI terminology. The first draft of this standard is expected to be available as early as 2020.

In this process, a balance must also be struck between innovation and standardization, informed by market-oriented thinking. If standardization comes too early, it can block innovation. We should bear in mind, though, that there are different types of standards that are used at various points in the development process. Standards in terminology, for example, can be used at the outset to promote innovation by improving interoperability, while standards for the implementation of technology applications can only be introduced during the use phase.

The integration of EU values into AI systems could make a positive difference in the market, providing a good selling point for the EU worldwide. Based on the high quality of European R&D and international standardization efforts, a European branded AI could be created, opening a window of opportunity not only to catch up, but also to compete with the United States and China.

Optimize the nexus of public-private partnerships

Collaboration between public and private entities in AI development is becoming increasingly problematic because the former do not have the same level of access to knowledge as the latter. This can leave public organizations unable to judge what they are buying into. Therefore, increasing the knowledge – and, thus, independence – of the state and general public vis-à-vis AI-based technologies is an investment opportunity. Furthermore, the relationship between consumers and industry needs to be based upon a tangible benefit from technologies. European R&D therefore has to develop AI systems that are simultaneously user-friendly and standards-compliant.

Many of the AI projects currently being led by the EU have a pilot and preparatory character and try to involve multiple stakeholders. Hence, it seems that the EU is taking a cautious approach before launching large-scale initiatives. The EU still has one of the biggest GDPs worldwide. It could leverage its huge market size to create inclusive, multi-stakeholder innovation ecosystems. To increase the strategic autonomy of the EU in AI, its initiatives should be bold, investing in strong innovation hubs and engaging with public entities and civil society organizations from early on. Here, an open question is whether existing EU initiatives for developing research and innovation frameworks – such as the upcoming Horizon Europe (2021–2027) – take sufficient account of the importance of AI for society as a whole and not merely for the business and R&D communities.

National, European, and International Legislation, Treaties, and Agreements

Complementing self-regulatory and “soft” regulatory instruments in shaping the framework conditions for beneficial AI-based technology is a third dimension: legislation, or “hard” regulation. The principal question is how smart, adaptive regulation can achieve positive results in a broad, emerging technological domain without stifling innovation.

Balance new law-making with better implementation of existing laws

AI is bound to affect policy fields such as competition (antitrust), industry and trade (including dual-use), justice (including the GDPR), consumer protection, defense, and foreign affairs. While there is a need for new pieces of legislation in some areas – for example, in the rapidly evolving field of Lethal Autonomous Weapons Systems (LAWS) – we need to examine carefully, on a case-by-case basis, when and where. It might be more sensible to regulate specific applications of AI rather than AI as such. Although AI-based technologies are developing to such an extent that existing regulation cannot accommodate all use cases, these cases should be a starting point for assessing which existing provisions and corresponding standards still hold before writing new ones. This position is supported by research undertaken in Estonia and other northern EU countries, which found no need for new regulations specifically addressing AI – not least to avoid the danger of overregulation. This may be especially important for ensuring the competitiveness of smaller European start-ups and enterprises, because large (US-based) companies can more easily comply with new and/or additional laws.

A key question to guide new policy initiatives is whether we are dealing with a new problem or whether the problem can be tackled with existing regulation. In the context of the “regulatory fitness” debates of the past couple of years, we should be careful not to unnecessarily add new laws that complicate the implementation of initiatives and are expensive to enforce. A contested point is whether we need specialized, tailor-made pieces of legislation for specific AI applications, or whether this would lead to an explosion in regulation. The opposing view maintained that the smart way to regulate is to reduce the number of laws and instead become more efficient in defining framework conditions that capture several key aspects of AI applications. The European Commission has, for example, been pursuing that approach for several years – not least on the grounds of facilitating entrepreneurial activity in a cross-border way within the EU.

There is often uncertainty on the part of the AI R&D community about the legal framework that applies to its activities, including the risk of being penalized for developing or adopting AI applications. Existing legislation was, of course, mainly written prior to current progress in AI-based technologies. Consequently, we have to check its suitability to take into account today’s very different facets of AI application. Currently, only the GDPR provides some norms that more or less specifically address AI. Yet, before resorting to the option of all-encompassing AI regulation, we need to examine whether combining the EU Charter of Fundamental Rights with corresponding case law (including such aspects as the rights of the child and freedom of expression) would be sufficient to deal with – to name just one prominent example – machine-learning-driven facial recognition technologies. Given that case law is notoriously slow to offer guidance to stakeholders, especially in the field of liability, some new legislation might be warranted.

Regulate technology application, not (only) technology development

A complementary ongoing discourse concerns the question of how technology can be well regulated in the face of rapidly advancing technological development. One option is not to regulate technology per se, but to define both how we want AI-based technology to be used in line with our values and what kinds of applications we are not prepared to tolerate. Technology-neutral regulation makes it possible to contextualize further developments in established social use. At this point, it is important to note that there are already laws in place that provide for a certain desired social outcome, such as European antidiscrimination law and its national implementations. However, in order to enact new laws or enforce existing ones, civil servants require ethical, legal, social, and technical expertise regarding the development and application of AI.

In the realm of emerging and converging technologies, we naturally cannot foresee how they will affect society. Precedent comes from the regulation of nuclear technology for civil and military use, which has led to decades of difficult international negotiations, some of which are still ongoing, about restrictions on development, test bans, and conditional definitions of use. Given the probabilistic nature of AI, procedures can be designed to test legal framework conditions for certain fields of application, instead of the technology itself. This might lead to better predictability of outcomes and, in the medium term, provide the right assurances to citizens. In this respect, the ongoing efforts in the EU to update the dual-use export control regulation of sensitive goods, services, and, crucially, intangible knowledge transfer – including AI-related chapters – are a case in point.

Establish harmonized and coherent red lines

The challenge for policy intervention is to provide framework conditions in a twofold manner. On the one hand, European governments should stimulate innovation and entrepreneurial activities in a joint and concerted effort (as also mentioned above). On the other hand, we need to define zones of intolerance. The precautionary principle has been enshrined in the EU treaties since 2005 and should be applied to AI R&D in the context of technology assessment. One possible avenue would be to shape future consumer protection law around the regulation of increasingly salient human-machine interactions. In this context, the issue of “distributed agency, which hampers attributability and causes diffusion of responsibility” needs to be addressed, as well as that of the diffusion of dangerous products or services.

A disquieting development to be taken into account is that companies flee the ambit of EU law and move to China or the US to develop AI applications, subsequently selling the products in the EU market. Therefore, in order to promote trust in the regulatory competence of the EU, a stronger, more effective, and more efficient EU-wide mechanism needs to be devised to eliminate duplications and inconsistencies in consumer, competition, trade, and human rights law. More coherence and harmonization would better serve the interests of both industry and citizens.

Fostering the European Capacity to Act in AI through Governance

How can governance facilitate the European capacity to act on AI in the current, shifting geopolitical and economic global order?

Strike a balance between a “European way of life” and a viable Europe in the world

When considering how to build Europe’s capacity to be competitive internationally, it is important to take into account the wider context in which AI-related policies are developed. These considerations may result in a tradeoff between competitiveness and policy-making driven by EU values, which are based on democracy, non-discrimination, the right to data security, and transparency. It is a matter of debate whether such a tradeoff is unavoidable. Could a European brand of AI, instead, be marketed as a uniquely innovative EU service? “Trustworthiness” in European AI – meaning that AI applications are secured by institutions and standards – was identified as one potential unique selling point. The EU could aim to take the lead in this field and promote debates on responsible and accountable AI research, innovation, and application by convening international and multilateral conferences on the topic. The EU should not confine itself to working out a regulatory framework for AI, which could be seen as one-sided and stifling; it is also important to identify incentives for growing AI solutions in particular areas. EU initiatives should therefore also strongly focus on the positive outcomes of AI and the services it can provide for individual citizens and communities.

Think in ecosystem terms and engage key stakeholders along the value chain

Competitiveness on a European level can only be achieved jointly within a European ecosystem that provides the infrastructure for AI development and application. This implies integrating national digitalization and AI strategies into a single European one. It will be necessary to activate all parts of the industrial-technological base of the EU in order to exploit synergies, avoid costly duplication, and reinforce European efforts. In addition, a comprehensive EU governance approach should create a permanent dialogue platform involving all stakeholders along the entire AI value chain to overcome the current separation of actors. This includes bringing together AI developers, stakeholders on the user side, regulatory authorities, businesses, and suppliers, as well as citizens and civil society actors.

Enable European infrastructures and operations to build AI champions

A key challenge for the future will be to achieve better data sharing between EU countries – a process that is currently limited. The amount and quality of data available within the EU is a valuable resource for developing AI tools, but it is currently not fully exploited. Such data sharing is not easy to implement partly because countries take different views on how many national restrictions are necessary and/or possible without losing competitiveness in the international market. A prerequisite for data sharing, therefore, is to establish rules for sharing and for the interoperability of infrastructures. Further to this point, establishing co-funded and co-operated EU-wide infrastructures – from fundamental research to mission-driven applied research – is a must for enabling framework conditions that help ideas enter the market, public administration, and society. Not least, in order to reap the benefits of AI, “tech nationalism” needs to give way to a unified European approach. This is key not only for building up critical mass, but also for being able to scale up efforts to a bigger market. European AI champions that can compete with US and Chinese initiatives need pan-European infrastructures and resources to tap into.

Bibliographic data

Kolliarakis, Georgios, and Isabella Hermann. “Towards European Anticipatory Governance for Artificial Intelligence.” April 2020.

DGAP Report No. 9, April 29, 2020, 16 pp.

DGAP Report No. 9, April 29, 2020, 60 pp. (extended version with think pieces by workshop participants)
