On Thursday, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence, with 84 votes in favour, 7 against and 12 abstentions.

In their amendments to the Commission's proposal, MEPs aim to ensure that AI systems are overseen by people, safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.

[Infographic: Risk-based approach to AI]

Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people's safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status, or personal characteristics). MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems, such as:

- "Real-time" remote biometric identification systems in publicly accessible spaces;
- "Post" remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions;
- Predictive policing systems (based on profiling, location or past criminal behaviour).