Responsible AI in Recruitment guidance from the Department for Science, Innovation and Technology (DSIT)

Financial institutions already use automated tools in their processes, and a wider range of AI-powered products is emerging quickly. DSIT published its Responsible AI in Recruitment guidance on 25 March 2024. In this blog, I summarise key takeaways from the paper.


  • Adopting Artificial Intelligence (AI)‑enabled tools in HR and recruitment simplifies existing processes and enables greater efficiency, scalability, and consistency in hiring.
  • However, these technologies also pose risks, including amplifying existing biases, digital exclusion, and discriminatory advertising and targeting.
  • Consequently, risk assessments, performance testing, and bias audits (along with impact assessments) are needed to minimise the risk of harm from deploying AI systems in recruitment.

Assurance mechanisms for different phases of the use of AI in recruitment

Before procurement:

  • Organisations should set out a clear vision for the desired purpose of the system: what problem the organisation is trying to solve, and how an AI system can help address it.
  • To do this, it is important to clarify functionality (the desired outputs of the system), the outcomes they want the AI system to produce, and the relevant assurance mechanisms, such as an AI governance framework and impact assessments.
  • Similarly, it is important to consider accessibility ('reasonable adjustments' may be needed under the Equality Act 2010) and data protection compliance.
  • Assurance mechanisms during this phase include data protection impact assessments, wider impact assessments, and AI governance frameworks.

During procurement:

Suppliers should be able to evidence consideration of the accuracy and scientific validity of AI systems, along with appropriate risk identification and communication.

  • Suppliers should have appropriate assurance mechanisms in place, e.g. bias audits, performance testing of models, and risk assessments with mitigations in place. Firms should clearly distinguish between acceptable and unacceptable risks.
  • Deployers should request documentation about the model from AI providers, such as standardised key facts about AI models ('model cards'), including details on model goals, intended use, limitations, training data, model performance, and other identified risks.
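The guidance does not prescribe a format for model cards. As an illustrative sketch only (all field names and values below are my own assumptions, not DSIT's), a deployer-side record capturing the key facts listed above might look like:

```python
from dataclasses import dataclass


# Hypothetical 'model card' record mirroring the facts the guidance
# suggests deployers request from AI providers. Field names are illustrative.
@dataclass
class ModelCard:
    model_goals: str
    intended_use: str
    limitations: list[str]
    training_data: str           # description of data sources, not the data itself
    performance_summary: dict    # e.g. headline metrics by evaluation set
    identified_risks: list[str]


card = ModelCard(
    model_goals="Rank CVs by relevance to a job description",
    intended_use="Shortlisting support; a human reviews every decision",
    limitations=["Not validated for non-English CVs"],
    training_data="Anonymised historical applications, 2019-2023",
    performance_summary={"top-10 precision": 0.82},
    identified_risks=["May reproduce historical hiring bias"],
)
```

Holding this information in a structured form makes it easier to compare suppliers during procurement and to revisit stated limitations during live operation.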

Before deployment:

  • Prior to deployment, it is recommended that organisations pilot the technology with potential users (employers, job seekers).
  • During this phase, considerations should include avoiding incorrect usage and assessing models against equalities outcomes. Organisations should be able to plan 'reasonable adjustments' to ensure that applicants with protected characteristics are not disadvantaged by AI systems.
  • Assurance mechanisms should include performance testing, employee training, impact assessments and ensuring transparency with users about AI use.
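DSIT does not mandate a particular metric for assessing models against equalities outcomes. One common heuristic used in recruitment bias audits (my choice of illustration, not the guidance's) is the 'four-fifths rule' adverse impact ratio, sketched below with made-up pilot figures:

```python
def adverse_impact_ratio(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    The 'four-fifths rule' treats ratios below 0.8 as a flag for potential
    adverse impact. This is a screening heuristic, not a legal test.
    """
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}


# Illustrative selection rates from a hypothetical pilot
rates = {"group_a": 0.40, "group_b": 0.28}
ratios = adverse_impact_ratio(rates)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
```

A flag from a check like this would not prove discrimination, but it would indicate that the model needs closer investigation before deployment.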

Live Operation:

  • Once an organisation has deployed an AI system, it should set up regular monitoring and evaluation to ensure that the system continues to perform as expected over time.
  • Assurance mechanisms include iterative bias audits, iterative performance testing, and a user feedback system. Feedback systems should be clearly signposted to users and should let them provide a detailed description of the issue (bugs, biases, etc.), report its severity, and report whether it prevented them from progressing as an applicant.
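As a minimal sketch of what such a feedback record might capture (the field names and severity scale are hypothetical, not specified by the guidance):

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Illustrative feedback record mirroring the fields the guidance suggests:
# a free-text description, a severity rating, and whether the issue
# prevented the applicant from progressing.
@dataclass
class FeedbackReport:
    description: str
    severity: Severity
    blocked_progress: bool


report = FeedbackReport(
    description="Screen reader could not access the video interview prompts",
    severity=Severity.HIGH,
    blocked_progress=True,
)
```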


There is no standardised approach to AI assurance. However, the considerations outlined in DSIT's guidance are relevant to all organisations seeking to procure or deploy AI systems in recruitment. AI assurance is an iterative process that should be embedded throughout an organisation's practices, to ensure that systems are set up responsibly and that the risks posed by the use of AI in recruitment are minimised. The annex of the guidance discusses in detail the use cases where these risks may be most prevalent.