On 21 April, the European Commission presented a package of initiatives concerning Artificial Intelligence (AI). It consists of a new coordinated plan, updating the one prepared together with the Member States in 2018, and two proposals for regulations. Of these, one introduces the first ever common regulatory framework on AI, while the other introduces new rules on machinery (replacing a previous directive from 2006).
The three initiatives, the most recent in a series of measures adopted by the Commission since 2018, respond to two main objectives: to engender greater trust in AI among European citizens and to foster innovation and investment in the field. The Commission seeks to achieve the first objective by introducing proportionate and flexible rules that address the specific risks posed by AI systems and ensure security and respect for citizens' fundamental rights. The second aims to ensure that the Union becomes a world leader in the development of human-centred, sustainable, secure, inclusive and reliable AI.
Confidence in AI and European excellence in the field are, therefore, the ultimate goals pursued by the Commission through its recent initiatives.
Moreover, through its timely regulatory intervention, the Union aims to shape the global approach to AI regulation. The same happened with the General Data Protection Regulation (GDPR), a textbook example of the Brussels effect, whereby European regulations become global benchmarks.
The field of Artificial Intelligence has become "of strategic importance, at the crossroads of geopolitics, commercial interests and security concerns", but it is still poorly regulated. In this context, the Union intends to "promote the definition of global AI standards in close cooperation with international partners, in line with the multilateral rules-based system and the values it upholds". Some observers anticipate that the European proposal "will almost certainly influence the ongoing debates within the Council of Europe, which is working on a binding treaty on AI, as well as the OECD, NATO, UNESCO and the Global Partnership on Artificial Intelligence propelled by the G7".
The coordinated plan on AI sets out a series of joint actions by the Commission and the Member States. These include: creating favourable conditions for the development and uptake of AI (e.g. through strategic data and information sharing and investment); promoting excellence in AI at European level; and ensuring that AI serves people and provides EU leadership in high-impact sectors and technologies. These actions will be financed through the Digital Europe and Horizon Europe programmes, through funds channelled via cohesion policy, and through the Recovery and Resilience Facility (which makes €134 billion available for digital). Overall, the Union aims to invest €1 billion annually in AI through the Digital Europe and Horizon Europe programmes, and then to catalyse further investments by Member States and private actors to reach an annual investment volume of €20 billion.
The proposal for a regulation on AI, also known as the Artificial Intelligence Act, introduces a risk-based system whereby the rules that apply to each AI system depend on its level of riskiness. In particular, the proposal provides for four levels of risk: unacceptable, high, limited and minimal.
Systems that are deemed to pose an unacceptable risk are prohibited. These include artificial intelligence systems that can manipulate human behaviour and circumvent the free will of users (especially minors); systems that governments can use to assign social ratings to their citizens, as is the case in China; and remote biometric identification systems used in publicly accessible areas for law enforcement purposes. The last of these, though prohibited in principle, may exceptionally be authorised (by a judicial or other independent body) in cases such as the search for a missing child, the prevention of a specific and imminent terrorist threat, or the detection of perpetrators of serious crimes.
High-risk AI systems, on the other hand, must meet stringent mandatory requirements in order to access the single market. In particular, the draft regulation stipulates that the supplier of a high-risk AI system must subject it to a conformity assessment (which in some cases also involves an independent third-party body) before it can be placed on the EU market, thereby demonstrating that the system meets all the requirements laid down by European law and is suitable for marketing. For the system to be authorised, it must be accompanied by detailed documentation and by clear and adequate information for the user; the datasets that feed it must be of high quality (to avoid discriminatory effects); and the provider must have put in place adequate systems for risk assessment and mitigation, for recording system activities, and for human oversight. It is up to the national authorities designated by the Member States to supervise the implementation of the measures contained in the regulation and to conduct market surveillance activities. In case of violations, the proposed regulation provides that Member States impose sanctions, which in some cases can exceed €30 million. High-risk AI systems include all systems intended for use in education and vocational training; in employment, labour management and access to employment (e.g. CV selection software); in migration management; in essential public and private services (e.g. credit provision); and many others.
Limited-risk AI systems (e.g. chatbots), on the other hand, are only subject to transparency requirements. The proposal introduces no new rules for minimal-risk systems (e.g. spam filters using AI, video games), i.e. most of the AI systems currently in use.
The two proposals for regulations presented by the Commission must now be adopted by the European Parliament and the Council of the European Union, as foreseen by the ordinary legislative procedure. A smooth passage is not guaranteed: the Parliament has already expressed dissatisfaction with some of the provisions contained in the Commission's text, through two letters addressed to President von der Leyen, and the Member States may also be divided (between countries more in favour of strict regulation, such as Germany, and countries inclined towards looser regulation that does not overly burden economic operators).
Translated by: Elena Briasco
 The European AI strategy and the first version of the coordinated plan on artificial intelligence, prepared with the Member States, were initially published in 2018 and updated on 21 April this year. Subsequently, in 2019, the High-Level Expert Group on AI drafted and published the Guidelines for Trustworthy AI. This was followed in 2020 by the Commission's publication of a white paper on AI, which laid the foundations for the proposals just presented. In conjunction with the publication of this white paper, a public consultation was also launched, which attracted broad participation. https://ec.europa.eu/commission/presscorner/detail/it/ip_21_1682
 See note 11.