27 June 2020
The future artificial intelligence regulatory framework should follow a risk-based approach, achieving its stated goals without imposing disproportionate burdens on companies. Supervision should therefore distinguish between two categories: high-risk artificial intelligence applications and non-high-risk artificial intelligence applications.
Two cumulative conditions should be met before an artificial intelligence application is classified as high risk. First, the application must be deployed in a sector where significant risks can be expected, such as healthcare, transport, and parts of the public sector.
Second, the application must be used in that sector in a way that itself generates significant risks. For example, healthcare is a high-risk sector for artificial intelligence applications, but a flaw in a hospital appointment-scheduling system will not usually create risks significant enough to justify legislative intervention.
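The two cumulative conditions above can be sketched as a simple classification check. This is a minimal illustration, not part of the framework itself; the sector list and field names are hypothetical placeholders, since the actual list would be defined and periodically revised by the regulator.

```python
from dataclasses import dataclass

# Hypothetical sector list for illustration only; the real list would be
# maintained and revised by the regulator.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public_sector"}

@dataclass
class AIApplication:
    sector: str             # sector in which the system is deployed
    significant_risk: bool  # does this specific use pose significant risk?

def is_high_risk(app: AIApplication) -> bool:
    """Both cumulative conditions must hold: the application operates in a
    listed high-risk sector AND its specific use generates significant risk."""
    return app.sector in HIGH_RISK_SECTORS and app.significant_risk

# A hospital appointment scheduler sits in a high-risk sector, but its
# specific use normally poses no significant risk, so it is not high risk.
print(is_high_risk(AIApplication("healthcare", significant_risk=False)))  # False
print(is_high_risk(AIApplication("healthcare", significant_risk=True)))   # True
```

Requiring both conditions keeps low-stakes uses inside high-risk sectors, such as appointment scheduling, outside the scope of the stricter obligations.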
To deal effectively with high-risk artificial intelligence applications, the following requirements should be considered.
High security: ensure that an artificial intelligence system trained on given data remains safe throughout the subsequent use of the product or service. For example, the system should be trained on data sets broad enough to cover all foreseeable application scenarios, including the avoidance of dangerous situations that were not anticipated.
Non-discrimination: take reasonable steps to ensure that artificial intelligence does not produce discriminatory results. For example, use sufficiently representative data sets so that attributes that may give rise to discrimination, such as gender, race, and colour, are adequately covered.
Protection of personal privacy: throughout the use of artificial intelligence products and services, personal data and privacy must be protected in strict compliance with GDPR requirements.
Accurate records of data sets and tests: given the complexity of artificial intelligence systems, the opacity of their internal operation, and the difficulty of quickly verifying compliance, it is important that test records related to algorithms and programming be retained. The data sets used to test high-risk artificial intelligence applications in specific situations should be accurately recorded, with the main characteristics of each data set and the method of its selection described in detail.
Retention of data files: retain documentation on the training methods, procedures, and related technologies used to establish, test, and validate artificial intelligence systems, including pre-established security measures and safeguards against discriminatory bias.
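The record-keeping duties above can be pictured as a structured documentation record kept alongside each trained system. The record type and every field value below are hypothetical, chosen only to mirror the items the text says must be retained (data set characteristics, selection method, training method, test procedures, bias safeguards).

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure for illustration; real documentation
# formats would be specified by the regulatory framework.
@dataclass
class TrainingRecord:
    dataset_name: str
    dataset_characteristics: str   # main characteristics of the data set
    selection_method: str          # how the data were selected
    training_method: str
    test_procedures: List[str] = field(default_factory=list)
    bias_safeguards: List[str] = field(default_factory=list)

# Example entry with invented values, purely illustrative:
record = TrainingRecord(
    dataset_name="triage-2020-v3",
    dataset_characteristics="anonymised admissions, balanced by age and sex",
    selection_method="stratified sampling across partner hospitals",
    training_method="gradient-boosted trees, 5-fold cross-validation",
    test_procedures=["holdout accuracy", "subgroup error analysis"],
    bias_safeguards=["reweighting", "post-hoc fairness audit"],
)
```

Keeping such records in a machine-readable form is what makes after-the-fact conformity verification practical despite the opacity of the system itself.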
System information: to strengthen human-machine trust and build responsible, trustworthy AI, systems should proactively provide the necessary information about high-risk artificial intelligence applications in a timely manner. For example, a system should disclose its intended scope of application, capability level, and design limitations, in particular the purpose for which it was designed, its expected effectiveness, and its corresponding accuracy.
Clear notice: when an individual interacts with an artificial intelligence system rather than with a person, the obligation to notify should be fulfilled. The system should provide objective, concise, easy-to-understand, and accurate information appropriate to the scenario.
Stability and accuracy: the artificial intelligence system must rely on mature, stable technology throughout the life cycle of the product or service, and its stated level of performance must accurately reflect its actual operation.
Reproducibility: the results generated by the artificial intelligence system must be reproducible and repeatable so that they can be examined after the fact.
Resilience: throughout the technology life cycle, an artificial intelligence system must be able to handle system errors or inconsistent instructions on its own. At the same time, whether facing a blatant external attack or a subtle, hidden manipulation of its data or algorithms, the system must show a degree of resilience and take interim measures to block ongoing violations.
Human oversight helps ensure that artificial intelligence systems do not erode human autonomy. Only by involving humans appropriately in high-risk applications can reliable, ethical, and human-centric artificial intelligence be achieved. Human oversight of artificial intelligence systems can take the following forms.
First, the output of the artificial intelligence system takes effect only after prior human review and approval. For example, the rejection of an application for social welfare affects the vital interests of vulnerable groups and should therefore be decided by a human.
Second, the output of the artificial intelligence system takes effect immediately but must be reviewed by a human afterwards. For example, an artificial intelligence system may reject a credit card application on its own, provided the decision is manually reviewed later.
Third, the artificial intelligence system is monitored continuously while it runs, with prompt human intervention in an emergency. For example, in autonomous driving, if the driver judges that the system is not operating safely, the stop button can be pressed immediately.
Fourth, additional operational restrictions are imposed on the artificial intelligence system in advance, at the design stage. For example, in conditions of low outdoor visibility, a driverless car's driving system stops automatically as soon as its sensors lose sensitivity or a safe following distance can no longer be maintained.
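The four oversight forms above differ mainly in when a human enters the loop. A minimal sketch, with hypothetical names, might model them as modes and a single gate deciding whether an output may take effect now:

```python
from enum import Enum, auto

class OversightMode(Enum):
    PRE_APPROVAL = auto()       # output takes effect only after human approval
    POST_REVIEW = auto()        # takes effect immediately, human reviews later
    LIVE_MONITORING = auto()    # human can intervene at any time while running
    DESIGN_CONSTRAINT = auto()  # operational limits built in at design time

def may_take_effect(mode: OversightMode, human_approves: bool = False) -> bool:
    """Return True if a system output may take effect now.

    Only PRE_APPROVAL blocks the output until a human confirms; the other
    modes let it take effect and rely on later review, live monitoring,
    or constraints fixed at design time.
    """
    if mode is OversightMode.PRE_APPROVAL:
        return human_approves
    return True

# A welfare-rejection decision awaiting human approval does not take effect:
print(may_take_effect(OversightMode.PRE_APPROVAL, human_approves=False))  # False
# A credit-card rejection under after-the-fact review takes effect at once:
print(may_take_effect(OversightMode.POST_REVIEW))  # True
```

The design choice the modes encode is a trade-off between decision latency and the severity of an uncorrected error: the graver the consequences, the earlier the human should sit in the loop.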
At present, many European Union countries make wide use of face recognition systems in public places and commercial settings, collecting and using citizens' biometric data through remote facial identification. To address the social concerns raised by such widespread use of artificial intelligence in public places, and to prevent fragmentation of the European digital single market, the European Commission will launch a broad public debate on which use cases, if any, may be justified, and on what measures are needed to ensure that such practices comply with the principles of proportionality and necessity.
Prior assessment of high-risk artificial intelligence applications should be incorporated into the future regulatory framework. Such a conformity assessment mechanism needs to take the following factors into account.
First, not everything can be assessed for conformity in advance. For example, the requirement to provide necessary information when a high-risk artificial intelligence application is used cannot be evaluated and verified beforehand.
Second, because some artificial intelligence systems continue to evolve and learn after deployment, their life cycle and operating status will need to be assessed repeatedly and at regular intervals.
Third, conformity verification should be carried out objectively and professionally against the training data, the methods used to train and test the artificial intelligence system, and the documentation of its processes and related technologies.
Fourth, all market operators should be required to undergo conformity assessment. To reduce the burden on SMEs, support structures such as digital innovation hubs could be created.