The surge in telework during the COVID-19 pandemic highlighted the growing role that digitalization can play in transforming work and the delivery of services. The European Union has recognised this and has already adapted its digital strategy, published in February 2020, by setting out plans for the digital decade to 2030. This has major implications for workers and public services and includes three key legislative proposals – the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act.
Objectives for 2030
To be more competitive globally and better prepared for future challenges such as climate change and healthcare, the EU has proposed a major digital initiative, the "EU digital decade 2030", aimed at accelerating digital development and safety. Four key development areas, known as the EU's digital compass, are intended to lead the digital transition:
- a digitally skilled population and highly skilled digital professionals,
- secure and sustainable digital infrastructures,
- digital transformation of businesses and
- digitalization of the public sector.
The Commission’s Digital Compass sets out the key policy areas needed to ensure these goals are met, including cloud computing, artificial intelligence, digital identities, data and connectivity. It also sets targets for the digital decade covering digital skills, infrastructure, businesses and public services.
A digitally skilled population means that by 2030 at least 80% of all adults should have basic digital skills and there should be 20 million ICT specialists in employment in the EU (compared to 7.8 million in 2019), with convergence between women and men. There are several targets to boost secure and sustainable digital infrastructures, including 100% coverage of European households with gigabit connectivity and 5G; European semiconductor production representing at least 20% of world output in value; a major boost to data processing capacity; and a first computer with quantum acceleration.
The digital transformation of businesses aims to have three out of four companies using cloud computing services, big data and artificial intelligence; more than 90% of small and medium-sized enterprises reaching at least a basic level of digital intensity; and about 250 unicorns (start-ups valued at $1bn or more) in the EU.
The digitalization of the public sector focuses on three main points: all key public services should be available online; all citizens should have access to their e-medical records; and 80% of citizens should be using a digital ID solution.
As part of the next long-term EU budget (2021-2027), the Digital Europe Programme will provide strategic funding to meet these challenges and goals. With a planned overall budget of €7.5 billion (in current prices), it aims to accelerate the economic recovery and shape the digital transformation of Europe.
An analysis by the consultants Deloitte argues that the Digital Decade targets are ambitious and will require substantial improvement in each of the targeted areas over the coming years. On its current trajectory, the EU may not achieve many of the targets that have been set by 2030.
The Next Generation EU initiative involves the EU allocating €750 billion in grants and loans to member states, which must submit national recovery and resilience plans to access the support. Each plan must ensure that 20% of the value of the proposed projects goes to digitalization. Of the 22 plans so far submitted and analysed by the European Commission, the share going to digitalization is just over 26%, at €117 billion. Of this, €43 billion is targeted at the digitalization of public services.
Digital Services Act
On 15 December 2020, the European Commission presented a Digital Services Act package with two draft pieces of legislation, a digital services act (DSA) and a digital markets act (DMA). The main aim of these proposals is to create a fairer playing field and make online platforms more responsible for published content. The DSA’s main objective is to regulate illegal and harmful content posted on intermediary platforms and to guarantee universal protection of rights and clear obligations for businesses and consumers across the internal digital market.
The term “digital services” is very broad, covering ordinary websites, social networks, online marketplaces, big infrastructure services and more. However, the DSA rules primarily regulate online intermediaries and platforms with a significant share of the market, such as online marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. As the European Parliament (EP) briefing on the DSA explains:
“Such platforms have become important players in the 'attention economy' that is at the core of their business model – as they match users with the most relevant information for them and often attempt to monetise the process by means of advertising or transactions – and are coming under increasing scrutiny given the systemic societal risks they pose.”
It is particularly concerning that harmful and illegal content is often driven by manipulative algorithms. For example, a study by the Mozilla Foundation covering 91 countries revealed that YouTube, the biggest video streaming service, uses an algorithm that systematically exposes users to misinformation (fake news), unhealthy body images and bizarre and radical theories. “The more stories we read, the more we realized how central YouTube has become to the lives of so many people — and just how much YouTube recommendations can impact their wellbeing”, the Mozilla Foundation concluded.
The EP briefing clarifies that the current EU rules on digital services have remained largely unchanged since the adoption of the e-Commerce Directive in 2000, while digital technologies and business models continue to evolve rapidly and new societal challenges have emerged. It is important to emphasize that new legislation will not replace, but complement the e-Commerce Directive and related regulations to make clearer rules on transparency, accountability and safety.
More specifically, the DSA will require:
- enhanced online advertising transparency requirements for online platforms;
- establishing notice and action mechanisms to fight illegal content online;
- traceability of business users in online marketplaces to fight the sale of illegal goods;
- very large online platforms to prevent the dissemination of illegal content and societal harms;
- safeguards that would allow users to challenge platforms’ content moderation decisions;
- a framework for data of key platforms to be made accessible to researchers for audit and risk assessment purposes; and
- online intermediaries that are established outside the EU to appoint a legal representative.
An overview of issues raised by the European Data Protection Supervisor expresses concern about whether the legislation goes far enough, arguing that the DSA needs to be strengthened to better protect individuals, particularly with respect to content moderation, online targeting and the recommender systems used by online platforms (including social media and marketplaces).
EU member states will have a key role in monitoring the application of the DSA and must appoint a DSA coordinator for that purpose. The legislation also introduces fines of up to 6% of an online intermediary service provider’s global turnover.
In September this year EPSU joined Amnesty International, Corporate Europe Observatory and over 70 other human rights and campaigning organisations in calling for changes to the DSA so that it better protects people’s rights and the public interest. The coalition, which the ETUC has also now joined, wants to see measures to:
- turn off the manipulation machine – the system of algorithm-driven recommendations that amplify hate speech and disinformation;
- stop surveillance for profit – the process driving advertising to those who never asked for it; and
- put people back in charge – robust powers for regulators and auditors to hold big tech to account.
As at October 2021, the Slovenian presidency of the EU was working on a compromise proposal on the DSA while the process in the European Parliament was underway led by the Internal Market and Consumer Protection Committee (IMCO) where a vote was expected on 8 November, with the possibility of a vote in the EP plenary in December.
Digital Markets Act
The second piece of legislation in the DSA package is the Digital Markets Act (DMA), whose main aim is to harmonize rules in the digital market to promote innovation and competition. It particularly targets so-called gatekeeper online platforms – those with a systemic role in the internal market that function as bottlenecks between businesses and consumers for important digital services. These include search engines (Google), social networks (Facebook) and online marketplaces (Amazon). The Commission is worried about the impact these companies are having on both competition and consumer rights.
The European Commission argues that:
“The accelerating digitalization of society and the economy has created a situation where a few large platforms control important ecosystems in the digital economy. They have emerged as gatekeepers in digital markets, with the power to act as private rule-makers. These rules sometimes result in unfair conditions for businesses using these platforms and less choice for consumers”.
In its analysis of the DMA, the Bruegel research group highlights how past experience shows that early competition with the big players didn’t last long, with Google replacing AltaVista after just one year and Facebook outcompeting MySpace after only three years. The report provides further clarification:
“Consider the market for mobile operating systems (OS). OS with more end-users is naturally more attractive to app developers than OS with fewer end-users. Developers thus tend to prioritise the largest OS (an example of indirect network effects). Over time, the gap in what larger and smaller OS can offer grows. The large OS gathers more user data which helps them improve the quality of their recommendations. The small OS becomes even less attractive until they go bust and the winners take all. One of the reasons Microsoft abandoned the mobile market in 2017 is that it could not attract enough app makers to its OS.”
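The winner-takes-all dynamic Bruegel describes – developers flow to the OS with more users, users drift toward the OS with more apps – can be sketched as a toy simulation. Every number below is hypothetical and chosen only to make the feedback loop visible; this is an illustration of indirect network effects, not a model of any real market.

```python
# Toy simulation of indirect network effects in a two-OS market.
# All parameters are illustrative, not empirical estimates.

def simulate(years=10, users=(110.0, 90.0), apps=(50.0, 50.0)):
    """Each year new apps flow to the OS with more users, and users
    drift toward the OS with the richer app ecosystem."""
    u_big, u_small = users
    a_big, a_small = apps
    history = []
    for _ in range(years):
        total_u = u_big + u_small
        # Developers prioritise the larger user base (indirect network effect).
        a_big += 20.0 * (u_big / total_u)
        a_small += 20.0 * (u_small / total_u)
        total_a = a_big + a_small
        # Users migrate toward the OS with more apps.
        shift = 10.0 * (a_big - a_small) / total_a
        u_big += shift
        u_small -= shift
        history.append((round(u_big), round(u_small)))
    return history
```

Running `simulate()` shows the gap between the two user bases widening every year even though both start with identical app catalogues: a small initial lead compounds as more users attract more developers, which attracts more users – the mechanism behind the Microsoft example above.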
To force fair competition on gatekeepers, the DMA proposes several strict rules:
- no self-preferencing: a prohibition on ranking their own products over others;
- data portability: an obligation to facilitate the portability of continuous and real-time data;
- no ‘spying’: a prohibition on gatekeepers using the data of their business users to compete with them;
- interoperability of ancillary services: an obligation to allow third-party ancillary service providers (e.g. payment providers) to run on their platforms; and
- open software: an obligation to permit third-party app stores and software to operate on their operating system.
The DMA also introduces fines of up to 10% of the companies’ global turnover. Its progress through the institutions mirrors that of the DSA.
Artificial Intelligence Act
In April 2021, the European Commission published its draft Artificial Intelligence Act. The proposal follows the ethical guidelines for trustworthy artificial intelligence adopted by an independent expert group, but has been strongly criticised by the ETUC for failing to properly address workplace concerns. The regulation defines an artificial intelligence system as software that is developed with one or more of the techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
However, global competition is fierce and the European Union has recognised the scale of the challenge: “Europe needs to invest at least 20 billion euros ($24 billion) in artificial intelligence research by 2020 and should pledge the same amount each year, the European Union said as it tries to stoke innovation in a bid to catch up with the U.S. and China.”
The EU’s main goal is to ensure the successful and competitive development of AI systems in harmony with EU values and fundamental rights. With this risk-based approach, the Commission welcomes AI development while seeking to avoid risks to people’s rights, health and safety. It identifies four levels of risk covering all AI systems: minimal, limited, high and unacceptable.
We encounter systems with minimal or limited risk almost every day (video games, deep-fake systems, etc.) and these have to meet only minimum requirements, mainly relating to transparency and responsible data management. Systems posing higher risks to health, safety and human rights would face stricter obligations and supervision. This applies in particular to sophisticated high-risk AI systems used, for example, in transport (e.g. self-driving cars), in recruitment (e.g. candidate-ranking systems) and in access to basic private and public services (e.g. social scoring systems, loan granting, etc.).
The last category of AI systems would, under the proposed regulation, involve unacceptable risk and thus be completely prohibited. This applies, for example, to the social-scoring systems that have taken root in China, where the authorities sanction and reward citizens for socially (un)desirable behaviours.
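The four-tier structure can be summarised in a short sketch. The tier assignments below simply restate the examples given in the text; they are illustrative only, not legal guidance or an official classification.

```python
# Illustrative sketch of the AI Act's four risk tiers.
# The example use cases restate the article's examples; not legal advice.

TIERS = ("minimal", "limited", "high", "unacceptable")

USE_CASES = {
    "video game": "minimal",
    "deep-fake system": "limited",               # transparency duties
    "self-driving car": "high",                  # strict obligations, supervision
    "recruitment ranking system": "high",
    "loan-granting system": "high",
    "social scoring by authorities": "unacceptable",  # prohibited outright
}

def is_prohibited(use_case: str) -> bool:
    """Only the 'unacceptable' tier is banned entirely under the draft."""
    return USE_CASES[use_case] == "unacceptable"
```

The key design point of the regulation is that obligations scale with the tier: minimal- and limited-risk systems face mainly transparency duties, high-risk systems face conformity assessments and supervision, and unacceptable-risk systems are banned outright.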
AI and the implications for labour
A report by the McKinsey Global Institute (MGI), Jobs lost, jobs gained: Workforce transitions in a time of automation, predicts that, depending on how rapidly AI is implemented, 400 to 800 million jobs will be automated worldwide by 2030 while between 555 and 890 million jobs will be created.
This clearly implies a major shock for the global labour market with, as the MGI report indicates, much of the automation process taking place in the EU, USA and China. The report warns of the likelihood of increased inequalities between and within countries and so underlines the importance of regulating the transition.
The European Trade Union Confederation (ETUC) recognises that (re)-training is vital but in its resolution on the European strategies on the AI and data it argues that more needs to be done:
“Investment in educating and up-skilling/re-skilling is therefore of utmost importance. Education policies aimed to better equip workers with the skills and competencies needed to design and operate AI systems are crucial, however, they will not be sufficient. Market dominance and market concentration of a handful of digital firms developing AI technologies and investing in AI ventures is a concern. Furthermore, tax policies should provide for a more balanced level playing field among companies, so as to allow AI technologies and their benefits to be shared more equally.”
The ETUC is also very concerned about the AI Act’s failure to address the workplace dimension and its resolution goes on to make the point that:
“an EU framework on AI should address the workplace dimension in an ambitious and proactive manner because workers are particularly concerned by AI technologies. The imbalance of power between employers and workers should lead the EC to consider a robust AI framework to create quality jobs, invest in worker’s AI literacy, promote and increase the safeguarding of workers’ rights, workers’ protection and ensure that trade unions and workers’ representatives participate actively in shaping AI at work. Such an AI framework should cover all workers and employers in the private and public sectors, for all business models including online platforms.”
Without such a framework, the AI Act risks becoming merely a tool of market competition, far removed from European (social) values.
Another alarming aspect of AI is its use in worker surveillance and in systems aimed at organizing work processes and measuring the efficiency of every individual worker. This is very common in the gig economy and platform work, where such data can be combined with customer rating systems to control (allegedly self-employed) workers. AI monitoring techniques are also present in white-collar jobs and can cover emails, phones, computer content, video monitoring and GPS tracking.
But the aim of such data collection is not just to boost productivity. As Valerio De Stefano argues in an International Labour Organisation working paper, it can easily be abused in ways that can:
“lead to very severe intrusion into workers’ private life and materially infringe their privacy, by allowing management to access to extremely intimate information, including, for instance, through the use of data based on medical insurance claims on the intention to become pregnant and on the possibility to develop sickness.”
Citing a case that went before the European Court of Human Rights, De Stefano shows that the right to private life enshrined in article 8 of the European Convention on Human Rights can provide some protection and limit employers’ scope to monitor a worker’s online activities.
If the AI act fails to protect workers and their human rights, it will just be seen as a tool for the market competition of big tech companies.
Digitalization and public services
Public services such as health, education and waste management are essential for all. In the last decade the digitalization of these services has hardly been questioned. It is almost as though everything with the prefix “e-“ or “smart” is perceived as mandatory and desirable.
The EU has strong ambitions to encourage the further digitalization of public services, arguing that this will make governance processes more efficient and transparent for citizens and businesses. There are also potential economic benefits, along with increased public service efficiency, greater trust in public institutions and stronger political participation. On the other hand, it is very important to implement digital technology in the right and fair way to avoid negative impacts on citizens and public service workers.
The European Commission has a scoreboard of indicators that measure progress in e-government and the provision of online public services. The “champions” among digitized nations are Estonia, Spain, Denmark, Finland and Latvia, all of which have scores greater than 85. On the other hand, Romania, Greece, Croatia, Slovakia and Hungary all score less than 60 and significantly below the EU average of 72.2.
The Public Services International (PSI) report, Digitalization and public services from a labour perspective, highlights the pros and cons, indicating how:
“digital technologies can improve public service quality and access and contribute to democratic accountability and citizens’ trust in public institutions, while advancing workers’ occupational health and safety. At the same time, it shows those same technologies can open the door to public service privatization, create a dangerous dependency of public institutions on private digital technology providers, and deepen inequalities among public service users.”
The real challenge is to ensure positive outcomes for the public interest and well-being. For everyone to benefit from digital public services, they need the appropriate skills and equipment. However, with over 92 million people (21% of the population) in the EU at risk of poverty or social exclusion, many will not be able to access digital public services. While older people usually need public services the most, they are also more likely to be poor and digitally illiterate. It is therefore necessary to fulfil social and economic pre-conditions to avoid increasing inequality during the digitalization process.
Transparent and safe data management is also necessary to avoid data leaks and misuse. The experience of the private health sector in the US shows how patients can lose out, denied insurance when their data is sold to insurers who cherry-pick the healthy in order to maximise profits.
A safe, secure and transparent infrastructure is crucial to protect personal data, particularly when it relates to health. However, this can be costly, with big tech companies queuing up in the hope of securing lucrative contracts. This underlines the need for a strong regulatory framework.
While digitalization can have a range of positive effects on jobs and services, the PSI report warns of potential drawbacks:
“Digitalization can also cause higher levels of working time, excessive workload, work intensification and stress, and increased management surveillance, especially if the introduction of new technologies is not properly prepared, implemented and monitored with the active participation of workers and their representatives. In addition, the over-reliance on computer-based services can be demeaning for workers and lead to a loss of motivation as their professional and social skills and their decision-making power become redundant or are undermined.”