The EU paves the way for ethical AI

The word on the streets of Brussels is that the next European Commission will welcome a Commissioner whose portfolio will focus on Artificial Intelligence. This would send a strong signal that the European Union is dedicated to becoming a leader in regulating AI – since it is not likely to assume leadership from a technological or industrial standpoint.

Intense worldwide competition in AI investment and research is under way between the three main players: the USA, the EU and China. Each has adopted a different approach to the development of AI: in the US, research and development is driven by industrial and corporate players, while in China and, to some extent, in Europe it is driven by researchers. Currently, the global competition on AI is largely between the USA and China. The USA relies on a strong corporate sector. China, on the other hand, is making a concerted effort to turn research into patents, and has put in place a tightly coordinated approach to AI spanning government policy, industrial applications and research.

The European Union has been trying to differentiate itself from its competitors by focusing on developing values and ethics in AI. The AI conversation in Brussels and many European capitals still revolves mainly around the promotion of a responsible, trustworthy, human-centric approach to AI and exporting this model by putting human-centric AI on the agenda of global forums. This approach is reflected in the European initiatives on AI.

Following the publication of its Communication on a “Coordinated Plan on Artificial Intelligence” on 7 December 2018, the European Commission presented its three-step approach to AI on 8 April 2019 in its new Communication on Building Trust in Human-Centric Artificial Intelligence. This approach includes “setting-out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric AI”. The key requirements for trustworthy AI are further detailed in the finalized Ethics Guidelines for Trustworthy Artificial Intelligence, developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG). The Guidelines outline the foundations and requirements that AI systems should implement and meet throughout their entire life cycle, as well as assessment tools. These guidelines have been praised for setting high standards for the development and use of AI.

AI already impacts our everyday life; new applications are developing fast, and machine-learning-based applications continue to learn and improve over time. We are already witnessing controversial uses of AI by certain companies and governments around the globe. The EU is setting key principles that should guide the use of AI and ultimately ensure that the solutions being developed are robust and respectful of established rights.

However, moving forward, it is important that the EU remembers that AI carries as many opportunities as it carries risks. If the EU wants to lead by example and prove that another way is possible, it also needs to ensure that future regulations do not hamper innovation. Fostering innovation is equally key to ensuring that the EU is not dependent on foreign technologies.

Good regulation should be grounded in communication between industry and policymakers, to ensure that innovation is harnessed for good.