6 June 2025
The Institute for the Future of Work’s longstanding research into the impacts of AI and automation on working lives shaped a major contribution to the Employment Rights Bill committee stage debate in the House of Lords in June 2025.
Tabled by Lord Tim Clement-Jones, the amendment introduced a clear statutory definition of “AI System” in the context of employment rights and proposed a new legal duty for employers to conduct Workplace AI Risk and Impact Assessments (WAIRIAs).
WAIRIAs build on IFOW’s Good Work Charter and the Pissarides Review’s framework for understanding automation—not as a singular process of substitution, but as a range of changing human–machine relationships, including augmentation, intensification, matching and displacement.
Drawing on this model, the amendment recognised that algorithmic systems in hiring, shift scheduling, performance management or pay setting pose distinct risks to employment rights, conditions, and access to decent work.
The proposed WAIRIA framework sets out a new anticipatory regime: requiring employers to assess the purpose, functionality, and potential harms of AI systems before deployment, and to consult those likely to be affected. Crucially, it establishes a duty to monitor impacts over time and respond to emerging risks, rather than relying solely on redress after harm has occurred.
The amendments and debate contributions are published here, and appended below:
“148: After Clause 34, insert the following new Clause—
“Definition of AI System
(1) For the purposes of this Act, “AI System” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as content, predictions, forecasts, recommendations, or decisions influencing real or virtual environments.
(2) AI systems may operate with varying degrees of autonomy and include use of algorithmic techniques such as natural language processing, large language models, multi-modal models, machine learning, speech or image recognition, neural networks, deep learning, or decision trees.”
Member’s explanatory statement
This amendment clarifies the definition of an AI System within the context of employment rights as an engineered system generating outputs from inputs using algorithmic techniques.
Lord Clement-Jones:
“My Lords, first of all, I must make my apologies that this is my first contribution to the Bill. I have waited until day 7—I am not quite sure that that is entirely my fault—but it is a pleasure to speak in this group, particularly as I know that the noble Lord, Lord Holmes, is on the same page, even if he has put forward a different set of amendments.
In moving Amendment 148, I will also speak to Amendments 149 and 150. I hope that these amendments are of interest to the Committee; they are certainly close to my heart. They address the profound and rapidly evolving impact of artificial intelligence systems on the modern workplace. Reports by the Institute for the Future of Work and the All-Party Group on the Future of Work paint a clear picture: the wide spread of AI at work is transforming lives and livelihoods in ways that have plainly outpaced, or avoided, the existing regimes of regulation. The impact of AI will be profound and, although there are potential benefits, there are also significant risks or impacts on employment rights and conditions in the workplace. We must make sure that the benefits of AI are realised but also that the detriment is avoided.
As the All-Party Group on the Future of Work found, there is an urgent need to bring forward robust proposals to protect people and safeguard our fundamental values in the workplace. Existing regulatory frameworks are strained. Technical approaches commonly applied before the deployment of algorithmic systems are often inadequate. That is why a systematic framework for accountability is urgently required.
The workplace AI risk and impact assessments—WAIRIAs, as we have coined them—proposed by these amendments are intended to provide such a framework. As the Institute for the Future of Work and others have argued, mandating such regimes of impact assessment is a practical response to a deficit of responsible foresight.
It is important for WAIRIAs to be made a legal requirement and for accompanying guidance to be issued to outline a framework. Amendment 148 defines what constitutes an “AI System” in this context as:
“an engineered system generating outputs from inputs using algorithmic techniques”.
That very clear definition ensures we are all addressing the same technology when discussing its regulation.
Amendment 149 introduces the cornerstone requirement for workplace AI risk and impact assessments. This amendment mandates that:
“Before implementing or developing an AI system which may have significant risks or impacts on employment rights and conditions in the workplace, an employer must conduct a workplace AI risk and impact assessment”.
The rationale for this is crucial. AI systems can have a potentially significant risk or impact on areas vital to workers, including:
“the identification or exercise of rights … work access or allocation … remuneration or benefits … contractual status, terms or conditions …”
and even
“mental, physical or psychosocial health”.
Without a mandated pre-deployment assessment, these significant impacts could go unexamined and unmitigated. Mandating pre-emptive impact assessments shifts the regulatory emphasis to active anticipatory intervention, moving away from limited retrospective evaluation.
The amendment details the scope and triggers for these essential WAIRIAs. They must be conducted not only before the implementation or development of an AI system, but also
“at least once every 12 months … whenever substantial changes are made to the AI system”
or, crucially,
“when evidence emerges of unforeseen significant risks or impacts”.
This requirement for ongoing monitoring and review is essential, because impacts may become apparent only over time.
Subsection (1) of the proposed new clause specifies the content of a WAIRIA. It must:
“document the intended purpose and functionality of the AI system … establish a process for undertaking the monitoring of significant risks and impacts”
and
“document the definitions, metrics and methods selected”.
The vital component of Amendment 150 is the requirement to make provision for consultation with the individuals, groups and authorised representatives who are likely to be affected. This aligns with the understanding that meaningful mechanisms to incorporate the voice, interests and perspectives of those affected by AI are necessary and that workers should always be treated as key stakeholders.
The assessment must also assess the significant risks and impacts likely to be produced, and identify the mitigations, adjustments, training or other measures made in response. The explanatory statement to Amendment 149 confirms the intention to
“document and mitigate the potential risks and impacts … before deployment, including consultation and regular review”.
The Institute for the Future of Work’s research underscores that the significant impacts on good work principles are rarely appreciated or prioritised in the design and deployment of AI at work—those principles are fundamental values and rights. The Good Work Charter by the institute could serve as a valuable checklist to consider potential impacts on work and workers and help to integrate what can be described as sociotechnical considerations into the assessment process. Mandating the risk assessments would support responsible and better innovation.
To ensure employers can carry out these assessments effectively and consistently, subsection (5) of the new clause proposed in Amendment 149 requires the Secretary of State to require the
“Fair Work Agency to issue guidance on the conduct, disclosure and enforcement of WAIRIAs within 6 months of this section coming into force”.
That guidance is crucial, as existing tools have not been built for this purpose and there is a need for clear regulation with better tools and guidance to support practical applications.
I repeat that these amendments are not about stifling innovation; they are about ensuring that the introduction of AI into the workplace is managed responsibly and does not undermine the fundamental rights, conditions and well-being of workers. By requiring employers to proactively assess potential risks, mitigate them and consult with those affected, we could help build a foundation of trust and ensure that the future of work is one that benefits everyone. I beg to move.
Baroness Jones (The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology):
My Lords, I thank the noble Lord, Lord Clement-Jones, for his Amendments 148, 149 and 150; the noble Lord, Lord Holmes of Richmond, for his Amendments 289, 290, 291, 292, 293, 294, 295, 296, 298, 315 and 316; and the noble Baroness, Lady Bennett, for her Amendment 323B. I thank them for generating an important debate on these issues. I thank my noble friend Lady O’Grady for her wise words on this issue.
I will take the amendments in turn. Amendments 148, 149 and 150 seek to introduce mandatory AI risk assessments in the workplace where there are significant impacts on workers, and would place a requirement on employers to consult employees and trade union representatives before implementing AI systems that might significantly impact employment rights and conditions. I thank the noble Lord, Lord Clement-Jones, for his Amendments 315 and 316, which would establish an independent commission on AI in the workplace and a project to investigate the potential challenges posed by the algorithmic allocation of work by employers. Amendment 323B, tabled by the noble Baroness, Lady Bennett, proposes a government review of the electronic monitoring of workers in the workplace. I agree with her that the cases that she cited were completely unacceptable.
As noble Lords will be aware, under data protection law employers are required to fulfil obligations as controllers if they collect and use their employees’ personal data. This includes the provision of meaningful information to workers when collecting their personal data if any decisions about them that have a legal or similarly significant effect will be based solely on automated processing. Furthermore, as noble Lords know, the Data (Use and Access) Bill includes a range of safeguards relating to solely automated decision-making with legal and significant effects on individuals. I reassure noble Lords that the Government’s plan to make work pay makes it clear that workers’ interests will need to inform the digital transformation happening in the workplace. Our approach is to protect good jobs, ensure good future jobs, and ensure that rights and protections keep pace with technological change.
The Government are committed to working with trade unions, employers, workers and experts to examine what AI and new technologies mean for work, jobs and skills. We will promote best practice in safeguarding against the invasion of privacy through surveillance technology, spyware and discriminatory algorithmic decision-making. The plan’s proposals regarding the use of AI and monitoring technology in the workplace were not included in the Employment Rights Bill to allow time for the full suite of options to be considered with proper consultation, given the novel nature of AI-enabled technology. However, I assure the noble Lord, Lord Clement-Jones, that the Institute for the Future of Work will be welcome to make an input into that piece of work and the consultation that is going forward. I reassure the noble Baroness, Lady Bennett, and all noble Lords that this is an area that the Government are actively looking into, and we will consult on proposals in the make work pay plan in due course.