OECD countries just might be on the brink of an AI revolution. While adoption of AI is still relatively low in companies, rapid progress with generative AI (e.g. ChatGPT), falling costs, and the increasing availability of workers with AI skills mark a technological watershed for labour markets.
This is the assessment of the Organisation for Economic Co-operation and Development (OECD) in its Employment Outlook 2023, released in July. As the organisation’s annual snapshot of labour markets across the world’s largest economies, it typically pinpoints one focal point for the future of work. No surprise, then, that AI, which has dominated headlines since the start of the year, features as the main topic for 2023.
When considering all automation technologies, including AI, the OECD finds that 27% of jobs are in occupations at high risk of automation. Initial conclusions from its new survey of AI’s impact on the manufacturing and finance sectors in seven countries highlight both opportunities and risks. On the positive side, the report says, AI can help reduce tedious and dangerous tasks, leading to greater satisfaction and safety. It also identifies a positive impact in terms of fairness in management and inclusion of disabled workers. Yet 63% of workers in finance and 57% in manufacturing worry about job losses over the next 10 years due to AI.
Despite uncertainty about the evolution of AI in the short- to medium-term, the OECD recommends concrete policy actions to reap the benefits AI can bring to the workplace while addressing risks to workers’ fundamental rights and well-being. Certain jurisdictions, including the European Union, have already started regulating AI (e.g. EU Artificial Intelligence Act and data protection regulation) and the OECD also points to collective bargaining and social dialogue as important tools to support workers and companies in the AI transition.
As one example of social partners’ initiatives around AI, the OECD Employment Outlook 2023 cites the code of conduct adopted by the World Employment Confederation in March 2023. Having seen the rapid deployment of AI in recruitment processes over the past few years, our sector deemed it essential to take an early stand in defining a set of standards we could align on.
As a result, our Taskforce on Digitalisation led a cross-industry collaboration that resulted in the adoption of a Code of Ethical Principles for the Use of Artificial Intelligence. It defines ten principles that members are required to apply when using AI to develop products, deliver services, and engage partners.
AI offers strong potential to support both workers and employers in their labour market journeys. It plays a role in ensuring better and faster matching of supply with demand, improving the user experience, grounding labour markets in skills, and unlocking the data needed to do so. However, as with the introduction of any new technology or system, we need to ensure that the use of AI in the HR services sector is grounded in principles that place the needs of individuals and society at their heart.
Our Code recognises that AI is evolving, and so represents a set of ten living principles that can be adapted over time. Unsurprisingly, several principles focus on the need for human characteristics in AI systems used in the recruitment and employment industry: Human-Centric Design – providing beneficial outcomes for individuals and society; Human in Command – ensuring that AI systems are designed to augment human capabilities, with clear processes in place so that they always remain under human direction and control; and Building Human Capacity – enhancing workers and managing fair transitions through the implementation of life-long learning, skills development, and training.
Other principles focus on the need for openness and responsibility: Transparency, Explainability, and Traceability – to ensure that those using AI systems are transparent about their use of the technology, provide workers and employees with information about their interactions with AI systems, and explain how these systems arrive at their decisions; and Accountability – to ensure that those deploying AI systems take responsibility for their use at all times.
The ten principles also address protection of people and systems: Privacy – requiring that AI systems used by the recruitment and employment sector comply with general privacy principles and protect individuals against any adverse effects of the use of personal information in AI; and Safety & Security – ensuring that systems are technically robust and reliable, with monitoring and tracking processes in place to measure performance and to retrain or modernise as necessary. Naturally, Ethical Governance also features as a principle, with WEC encouraging frameworks to ensure the ethical development and use of AI, including the involvement of relevant stakeholders such as government, civil society, and academia in the decision-making process.
Two further principles focus on broader societal objectives: Fairness and Inclusivity by Design – seeking to ensure that the AI systems used by the sector treat people fairly and respect the principles of non-discrimination, diversity, and inclusiveness, and requiring that appropriate risk assessment and mitigation systems be implemented throughout the AI system lifecycle; and Environmental and Societal Well-being – aiming to ensure that AI systems are designed and used in a way that considers the environmental and societal impacts of their use.
At the core of our principles lies the need to keep a human-centric approach to artificial intelligence and to lay the foundations for building better labour markets. As the OECD Employment Outlook 2023 rightfully flags, trustworthy use of AI is key. As organisations move ahead and embrace AI across their business, governments need to ensure that it continues to support inclusive labour markets rather than hinder them, thereby bringing opportunities for all.
By
Denis Pennel
Managing Director, World Employment Confederation