EU policy/Consultation on the White Paper on Artificial Intelligence (2020)
This page contains the questions from the public consultation by the European Commission on its Artificial Intelligence White Paper. It is intended as a working document for Wikimedians to collaboratively draft Wikimedia's answers to this legislative initiative of the EU.
The EU's survey will remain open until 14 June 2020, but we will only take into account input added here by 31 May 2020.
Contribution
The following documents were submitted by the FKAGEU:
1. Survey answers
2. A paper on IPR and AI
3. A call for "public money, public code" within AI
Introduction
Artificial intelligence (AI) is a strategic technology that offers many benefits for citizens and the economy. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans and the protection of workers, and in many other ways that we can only begin to imagine.
At the same time, AI entails a number of potential risks, such as risks to safety, gender-based or other kinds of discrimination, opaque decision-making, or intrusion in our private lives.
The European approach for AI aims to promote Europe’s innovation capacity in the area of AI while supporting the development and uptake of ethical and trustworthy AI across the EU. According to this approach, AI should work for people and be a force for good in society.
For Europe to seize fully the opportunities that AI offers, it must develop and reinforce the necessary industrial and technological capacities. As set out in the accompanying European strategy for data, this also requires measures that will enable the EU to become a global hub for data.
Consultation
The current public consultation accompanies the White Paper on Artificial Intelligence - A European Approach, which aims to foster a European ecosystem of excellence and trust in AI, and a Report on the safety and liability aspects of AI. The White Paper proposes:
- Measures that will streamline research, foster collaboration between Member States and increase investment into AI development and deployment;
- Policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications.
This consultation enables all European citizens, Member States and relevant stakeholders (including civil society, industry and academics) to provide their opinion on the White Paper and contribute to a European approach for AI. To this end, the following questionnaire is divided into three sections:
- Section 1 refers to the specific actions, proposed in the White Paper’s Chapter 4 for the building of an ecosystem of excellence that can support the development and uptake of AI across the EU economy and public administration;
- Section 2 refers to a series of options for a regulatory framework for AI, set out in the White Paper’s Chapter 5;
- Section 3 refers to the Report on the safety and liability aspects of AI.
Other initiatives
A number of other EU initiatives address AI.
One of them is the own-initiative (i.e. non-legislative) report by the European Parliament, Framework of ethical aspects of artificial intelligence, robotics and related technologies (2020-04-21 draft); IMCO held a meeting on 2020-05-18.
Section 1 - An ecosystem of excellence
To build an ecosystem of excellence that can support the development and uptake of AI across the EU economy, the White Paper proposes a series of actions.
In your opinion, how important are the six actions proposed in section 4 of the White Paper on AI?
(1-5: 1 is not important at all, 5 is very important)
- Working with Member states
- Focusing the efforts of the research and innovation community
- Skills
- Focus on SMEs
- Partnership with the private sector
- Promoting the adoption of AI by the public sector
Are there other actions that should be considered?
Comments
Revising the Coordinated Plan on AI (Action 1)
The Commission, taking into account the results of the public consultation on the White Paper, will propose to Member States a revision of the Coordinated Plan to be adopted by the end of 2020.
In your opinion, how important is it in each of these areas to align policies and strengthen coordination as described in section 4.A of the White Paper?
(1-5: 1 is not important at all, 5 is very important)
- Strengthen excellence in research
- Establish world-reference testing facilities for AI
- Promote the uptake of AI by business and the public sector
- Increase the financing for start-ups innovating in AI
- Develop skills for AI and adapt existing training programmes
- Build up the European data space
Are there any other actions to strengthen the research and innovation community that should be given a priority?
Comments
Focusing on Small and Medium Enterprises (SMEs)
The Commission will work with Member States to ensure that at least one digital innovation hub per Member State has a high degree of specialisation on AI.
In your opinion, how important are each of these tasks of the specialised Digital Innovation Hubs mentioned in section 4.D of the White Paper in relation to SMEs?
(1-5: 1 is not important at all, 5 is very important)
- Help to raise SMEs’ awareness about the potential benefits of AI
- Provide access to testing and reference facilities
- Promote knowledge transfer and support the development of AI expertise for SMEs
- Support partnerships between SMEs, larger enterprises and academia around AI projects
- Provide information about equity financing for AI startups
Are there any other tasks that you consider important for specialised Digital Innovation Hubs?
Comments
Section 2 - An ecosystem of trust
Chapter 5 of the White Paper sets out options for a regulatory framework for AI.
In your opinion, how important are the following concerns about AI?
(1-5: 1 is not important at all, 5 is very important)
- AI may endanger safety
- AI may breach fundamental rights (such as human dignity, privacy, data protection, freedom of expression, workers' rights etc.)
- The use of AI may lead to discriminatory outcomes
- AI may take actions for which the rationale cannot be explained
- AI may make it more difficult for persons having suffered harm to obtain compensation
- AI is not always accurate
Do you have any other concerns about AI that are not mentioned above? Please specify:
Comments
Do you think that the concerns expressed above can be addressed by applicable EU legislation? If not, do you think that there should be specific new rules for AI systems?
- Current legislation is fully sufficient
- Current legislation may have some gaps
- There is a need for new legislation
- Other
- No opinion
Comments
If you think that new rules are necessary for AI systems, do you agree that the introduction of new compulsory requirements should be limited to high-risk applications (where the possible harm caused by the AI system is particularly high)?
- Yes
- No
- Other
- No opinion
If you wish, please indicate the AI application or use that is most concerning (“high-risk”) from your perspective:
Comments
In your opinion, how important are the following mandatory requirements of a possible future regulatory framework for AI (as per section 5.D of the White Paper)?
(1-5: 1 is not important at all, 5 is very important)
- The quality of training data sets
- The keeping of records and data
- Information on the purpose and the nature of AI systems
- Robustness and accuracy of AI systems
- Human oversight
- Clear liability and safety rules
Comments
Information on the purpose and the nature of AI systems (5): computer scientists and engineers have a right to know what they are building, especially if an AI system can be used for benign as well as insidious purposes. This information should include not only details about the training data, such as whether it was collected consensually and knowingly, and the purpose of the algorithm, including dual-use applications that may be introduced down the line, but also information on the downstream buyers of AI systems.
In addition to the existing EU legislation, in particular the data protection framework, including the General Data Protection Regulation and the Law Enforcement Directive, or, where relevant, the new possibly mandatory requirements foreseen above (see question above), do you think that the use of remote biometric identification systems (e.g. face recognition) and other technologies which may be used in public spaces need to be subject to further EU-level guidelines or regulation:
- No further guidelines or regulations are needed
- Biometric identification systems should be allowed in publicly accessible spaces only in certain cases or if certain conditions are fulfilled (please specify)
- Other special requirements in addition to those mentioned in the question above should be imposed (please specify)
- Use of biometric identification systems in publicly accessible spaces, by way of exception to the current general prohibition, should not take place until a specific guideline or legislation at EU level is in place
- Biometric identification systems should never be allowed in publicly accessible spaces
- No opinion
Comments
Do you believe that a voluntary labelling system (Section 5.G of the White Paper) would be useful, in addition to existing legislation, for AI systems that are not considered high-risk?
- Very much
- Much
- Rather not
- Not at all
- No opinion
Do you have any further suggestion on a voluntary labelling system?
Comments
What is the best way to ensure that AI is trustworthy, secure, and respectful of European values and rules?
- Compliance of high-risk applications with the identified requirements should be self-assessed ex-ante (prior to putting the system on the market)
- Compliance of high-risk applications should be assessed ex-ante by means of an external conformity assessment procedure
- Ex-post market surveillance after the AI-enabled high-risk product or service has been put on the market and, where needed, enforcement by relevant competent authorities
- A combination of ex-ante compliance and ex-post enforcement mechanisms
- Other enforcement system
- No opinion
Do you have any further suggestion on the assessment of compliance?
Comments
Section 3 - Safety and liability aspects of AI
The overall objective of the safety and liability legal frameworks is to ensure that all products and services, including those integrating emerging digital technologies, operate safely, reliably and consistently, and that any damage that occurs is remedied efficiently. The current product safety legislation already supports an extended concept of safety, protecting against all kinds of risks arising from a product according to its use. However, which particular risks stemming from the use of artificial intelligence do you think should be further spelled out to provide more legal certainty?
- Cyber risks
- Personal security risks
- Risks related to the loss of connectivity
- Mental health risks
In your opinion, are there any further risks to be expanded on to provide more legal certainty?
Comments
Do you think that the safety legislative framework should consider new risk assessment procedures for products subject to important changes during their lifetime?
- Yes
- No
- No opinion
Do you have any further considerations regarding risk assessment procedures?
Comments
|
---|
|
Do you think that the current EU legislative framework for liability (Product Liability Directive) should be amended to better cover the risks engendered by certain AI applications?
- Yes
- No
- No opinion
Do you have any further considerations regarding the question above?
Comments
Do you think that the current national liability rules should be adapted for the operation of AI to better ensure proper compensation for damage and a fair allocation of liability?
- Yes, for all AI applications
- Yes, for specific AI applications
- No
- No opinion
Do you have any further considerations regarding the question above?
Comments