About four months ago, European authorities presented the world's first comprehensive draft law on artificial intelligence, known as the Artificial Intelligence Act (AI Act). Given the immense number of unknowns that this technology's evolution opens up, the challenge was, at the very least, to try to protect people's fundamental rights in the use of AI.
Surprisingly, given how slowly some legislative initiatives have moved in the past, on March 13 the European Parliament approved the new artificial intelligence regulation to guarantee the security and rights of citizens. This agreement is a first step that should set a precedent for a new global standard for regulating AI, one that could be finalized in 2024 itself and would enter into force in 2026.
The European regulation focuses on the risks that the technology poses to fundamental rights, democracy, the rule of law and environmental sustainability. The aim is not to limit the advancement of a technology with foreseeable beneficial effects, but to draw certain red lines where specific risks arise. It therefore establishes a classification of AI systems according to the level of risk they present. There are three categories:
Unacceptable risk: AI systems that pose a direct threat to public safety, fundamental rights or privacy. Their use is strictly prohibited, except in very exceptional situations.
High risk: includes AI systems that could significantly affect individuals' fundamental rights, such as systems involved in services and processes that affect health, safety or employment. Their use is permitted, but subject to additional safeguards and monitoring of their operation.
Low or minimal risk: everything not covered by the categories above. This tier relies on citizens' ability to make free, informed, voluntary and unequivocal decisions about using these technologies, such as generative AI.
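The three tiers above can be pictured as a simple decision ladder. The sketch below is purely illustrative: the boolean flags and the `classify` helper are hypothetical simplifications of what is, under the Act itself, a detailed legal determination based on annexes and criteria, not code.

```python
from enum import Enum

# The three risk tiers described in the article (illustrative only).
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited, save for very exceptional situations"
    HIGH = "permitted with additional safeguards and monitoring"
    LOW_OR_MINIMAL = "permitted; relies on free, informed user consent"

def classify(threatens_rights_or_safety: bool,
             affects_health_safety_or_employment: bool) -> RiskTier:
    """Toy triage mirroring the article's three categories.

    The two flags are hypothetical inputs; real classification under
    the AI Act is a legal judgment, not a boolean check.
    """
    if threatens_rights_or_safety:
        return RiskTier.UNACCEPTABLE
    if affects_health_safety_or_employment:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

# Example: a hiring-screening system would land in the high-risk tier.
print(classify(False, True).name)  # HIGH
```

The ladder structure matters: the tiers are checked from most to least restrictive, so a system is placed in the strictest category whose conditions it meets.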
One of the most prominent topics is, of course, biometrics. The key here is that users know the system and decide freely whether to use it. Biometric recognition applications that involve the active, conscious participation of the user, demonstrating their knowledge and approval, will therefore be considered low risk. This approach is very important, as it places the full weight on consent in the digital age.
In general, organizations must provide ethical and responsible systems that allow people to give informed and unequivocal consent, and people must also have mechanisms to control and defend their rights with respect to this data processing.
It sets a precedent for how emerging technologies can be regulated effectively and ethically without sacrificing innovation. We will continue to be vigilant.