Artificial Intelligence Act – risks for all remain high

Image: AI brain anatomy by Gordon Johnson, Pixabay

(18 December 2023) With the increasing use of artificial intelligence systems, and especially of so-called general-purpose artificial intelligence such as ChatGPT, regulating the use of such systems in society and in the workplace is crucial to protect us from unwanted consequences. The European Parliament and the Council reached agreement on the EU Artificial Intelligence Act on 9 December 2023.

ETUC welcomed the fact that the AI Act recognises the high-risk nature of workplace applications of AI. ETUC has now called for specific protections for workplaces dealing with AI systems. The trade unions want a human-in-command approach, and employers must be accountable to workers for decisions. ETUC stresses:

  • High-risk classification: AI applications in the workplace are classified as high risk. However, the reliance on self-assessment by the providers is a major flaw.
  • Opening clause: Member states and the European Union have the authority to regulate AI’s workplace use.
  • Transparency for workers: The agreement stipulates that workers and their representatives must be informed when AI systems are deployed in the workplace.

Workers are not yet fully protected. ETUC therefore calls for a dedicated Directive on algorithmic systems in the workplace. Such a Directive should uphold the human-in-control principle and empower trade unions and workers' representatives to influence decisions on the implementation of AI.

A start on such regulation was made with the agreement that the employers and trade unions reached on digitalisation in central government administrations. It sets out a number of key provisions that, once transposed, would apply to millions of workers. EPSU and the employers have called upon the European Commission to implement the agreement now that the cross-sectoral negotiations on telework have failed.

As regards the new Artificial Intelligence Act, Aida Ponce, researcher at ETUI, calls the AI Act "deregulation in disguise". She welcomes the obligation in the Act to conduct a fundamental rights impact assessment. This concerns public bodies and private entities which provide services of general interest (hospitals, schools, banks, insurance companies) and deploy high-risk systems. She warns, however, that some high-risk applications might slip through as falling outside the scope of the Act. Critically, "the mandatory fundamental rights impact assessments and the ban on biometric identification systems are undermined by major exceptions, and high-risk AI tools could be deployed with the claim of urgency. Emotion-recognition AI systems in the workplace are to be banned yet allowed for safety reasons: where does safety begin and end, and will workers have a real say?" She also points to the role given to tech providers and the risk that they might outsource start-up activity to smaller companies and thus bypass the controls of the Act.

The inclusion of the fundamental rights impact assessments was due to lobbying and last-minute pressure from civil society organisations, which EPSU joined.

For the ETUC press release

For previous EPSU work, for example on banning biometric surveillance, on ending the dominance of big tech in the EU, or on the need for a public digital infrastructure

For the position of the European Parliament, which highlights:

  • Safeguards agreed on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations
  • Fines ranging from 7.5 million euro or 1.5% of turnover up to 35 million euro or 7% of global turnover

Parliament stresses the banned applications. The following will be prohibited:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent people's free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Alas, there are law enforcement exemptions.

The Council position is available here

The Council also stresses the fundamental rights impact assessments required before applications can be brought to market.