The EU is working to set the scope of strict new rules on artificial intelligence (AI) technologies such as ChatGPT, the chatbot that has recently been making headlines.
The EU does not yet have regulations covering ChatGPT or similar AI systems, but two years ago the European Commission prepared the first legislative proposal for a framework of new AI rules and submitted it to the member states and the European Parliament.
The proposal would introduce limitations and transparency rules on the use of AI systems. If it becomes law, systems such as ChatGPT would also have to comply with these rules.
The new rules, which are expected to apply uniformly across all member states, take a risk-based approach.
In the Commission's proposal, AI systems are classified into four groups: unacceptable risk, high risk, limited risk, and minimal risk.
AI systems considered a clear threat to people's safety, livelihoods, and rights fall into the unacceptable-risk group. Systems in this group are expected to be banned.
AI systems or applications that subvert individuals' free will, manipulate human behavior, or perform social scoring would also be prohibited.
The high-risk group covers areas such as critical infrastructure, education, robot-assisted surgery, CV screening for recruitment, credit scoring, evidence evaluation in law enforcement, migration, asylum and border management (including travel document verification), biometric identification systems, and judicial and democratic processes.
Strict requirements would be imposed on high-risk AI systems before they could be placed on the market. These systems would have to be non-discriminatory, and their results would have to be traceable and subject to adequate human oversight.
Under the rules, law enforcement authorities would be able to use biometric identification systems in public spaces only in exceptional cases, such as terrorism and serious crime. Even then, such uses would be limited and subject to authorization by judicial authorities.
Systems in the limited-risk group would also have to comply with certain transparency obligations.
Chatbots fall into this limited-risk group. The goal is to ensure that users conversing with a chatbot know they are interacting with a machine.
Applications such as AI-supported video games or spam filters fall into the minimal-risk group. AI systems in this group, which pose little or no risk to people's rights or safety, would face no intervention.
The proposal includes fines of up to €30 million ($32.9 million) or 6% of global annual turnover for violators.
Work on the AI law, which requires approval from the European Parliament and the member states to enter into force, is continuing.
Source: aa