The European Union is reportedly considering a ban on certain uses of artificial intelligence, particularly mass surveillance and social credit scoring. An official announcement is expected next week. If the proposal becomes law, the EU would take a notably strong stance on certain AI applications, setting it apart from the US and China. Specific use cases would be policed in much the same way the EU regulates digital privacy under the GDPR.
The Regulations
Validation and testing might be required as part of the assessment of high-risk AI systems. Companies that develop or sell prohibited artificial intelligence in the EU, including those based elsewhere in the world, could be fined up to 4% of their global revenue.
It is speculated that the proposal will also ban ‘indiscriminate surveillance’, underscoring individual privacy and security and guarding against unwarranted intervention by artificial intelligence systems. Social credit scoring might be outlawed, meaning that judging a person’s trustworthiness based on social behavior would no longer be permitted. Special authorization might be required for the use of ‘remote biometric identification systems’ such as facial recognition. Users would also have to be notified whenever they interact with an AI system, once again emphasizing user security and consent. It is further speculated that a “European Artificial Intelligence Board” will be created, comprising members from every member state, which would jointly decide which high-risk AI systems require changes or prohibition.
The most significant section of the draft is the one prohibiting mass surveillance and social credit scoring. The section has drawn both agreement and disagreement, with some experts calling for improvements. There have also been comments about the apparent vagueness and lack of clarity in the wording of the draft legislation.
The disagreements also arise from differing opinions on regulating a technology whose nuances are still being explored. Many argue that heavy regulation would handicap the technology’s development, preventing the region from reaping its full benefits.
However, according to Michael Veale, a lecturer in digital rights and regulation at University College London,
“Few tears will be lost over laws ensuring that the few companies that sell safety-critical systems or systems for hiring, firing, education, and policing do so to high standards. Perhaps, more interestingly, this regime would regulate buyers of these tools, for example, to ensure there is sufficiently authoritative human oversight.”