Artificial Intelligence (AI) is increasingly considered a general-purpose technology with a wide range of applications across sectors. Based on machine learning, AI systems can learn, adapt to changing environments, and act autonomously with minimal human supervision. This autonomy has raised ethical concerns about human–machine interaction. In the workplace, AI adoption has been linked to risks to worker privacy, work intensification, and occupational safety and health. Accordingly, global efforts to regulate AI in employment have involved collective bargaining agreements and statutory laws driven by labour market regulators. This paper draws on theoretical frameworks from science and innovation policy to analyse nascent AI regulation in three GCC case study countries: the UAE, Qatar, and KSA. The analysis is based on a qualitative document analysis of 10 government documents, including AI strategies, ethics principles, and adoption guidelines. The findings show that AI in the workplace is largely perceived as an opportunity, with the main challenge being the supply of AI talent. Policy instruments focus on education and training, with strong government involvement. The actors involved in agenda-setting and policymaking come primarily from technology, security, and education epistemic communities. The paper recommends involving labour market regulators to advance the worker rights agenda in the region.