We have highlighted six key points organisations must consider before implementing algorithms for hiring purposes.

Bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making

So, you must assess whether AI is a necessary and proportionate solution to a problem before you start processing. This assessment should form part of your data protection impact assessment. We have written about what you need to consider when undertaking data protection impact assessments for AI in our guidance on AI and data protection.

It is hard to build fairness into an algorithm

UK-based organisations also need to remember there is no guarantee that an algorithm designed to meet US standards will meet UK fairness standards.

The advancement of big data and machine learning algorithms is making it harder to detect bias and discrimination

This is an area where best practice and technical approaches continue to develop. You should monitor changes and invest time and resources to ensure you continue to follow best practice and your staff remain appropriately trained.

You must consider data protection law AND equalities law when developing AI systems

Data protection law addresses unjust discrimination in several ways:

- Under the fairness principle, AI systems must process personal data in ways an individual would reasonably expect.
- The fairness principle requires any adverse impact on individuals to be justified.
- The law aims to protect individuals' rights and freedoms with regard to the processing of their personal data. This includes the right to privacy but also the right to non-discrimination.
- The law states businesses must use appropriate technical and organisational measures to prevent discrimination when processing personal data for profiling and automated decision-making.
- Organisations must undertake a data protection impact assessment when processing data in this way and ensure they build in data protection by design.
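One practical starting point for the bias-detection monitoring described above is to track selection rates across groups. The sketch below is illustrative only and is not part of the guidance: it computes the "adverse impact ratio" used in the US four-fifths rule of thumb, where the data, group labels, and the 0.8 threshold are all assumptions for the example.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 is
    commonly treated as a flag for possible adverse impact; it is
    a screening heuristic, not a legal test.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring outcomes for two groups of four candidates each.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
ratio = adverse_impact_ratio(decisions)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33, below 0.8
```

A check like this only surfaces disparities in outcomes; it does not explain their cause, so flagged results still need human investigation.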
These accountability mechanisms force organisations to consider how their processing might infringe on people's rights and freedoms, including through discrimination and bias.

So, although both address unjust discrimination, organisations must consider their obligations under data protection law and equalities law separately. Compliance with one will not guarantee compliance with the other.

Using solely automated decisions for private sector hiring purposes is likely to be illegal under the GDPR

Solely automated decision-making that has a legal or similarly significant effect is prohibited under the General Data Protection Regulation (GDPR). There are three exemptions to this:

- you have the individual's explicit consent;
- the decision is necessary to enter into a contract; or
- it is authorised by Union or Member State law.

However, these are unlikely to apply in the case of private sector hiring. This is because:

- consent is unlikely to be freely given due to the imbalance of power between the employer and the job candidate;
- a solely automated decision is unlikely to be necessary, because it could be replaced with a decision-making process involving a human; and
- the exemption allowing authorisation by Union or Member State law is not applicable to private businesses.

Organisations should therefore consider how they can bring a human element into an AI-assisted decision-making process.

Algorithms and automation can also be used to address the problems of bias and discrimination

Algorithms do not just impact society; society also impacts the use of algorithms.
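The point above about bringing a human element into AI-assisted decision-making can be sketched in code. This is a minimal illustration under assumed names (the `Candidate` type, the scores, and the thresholds are all hypothetical), not a recommended implementation: the AI score only orders and annotates a review queue, and every final decision is left to a person.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # hypothetical screening score from an AI model, 0.0-1.0

def route_for_review(candidates, suggest_reject_below=0.2, suggest_advance_above=0.9):
    """Build a human review queue from AI screening scores.

    No candidate is accepted or rejected automatically: the model's
    output is reduced to an ordering and a non-binding suggestion,
    and every entry awaits a human decision.
    """
    queue = sorted(candidates, key=lambda c: c.ai_score, reverse=True)
    return [
        {
            "name": c.name,
            "ai_score": c.ai_score,
            "suggestion": (
                "advance" if c.ai_score >= suggest_advance_above
                else "reject" if c.ai_score < suggest_reject_below
                else "borderline"
            ),
            "decision": "pending human review",  # a person makes the final call
        }
        for c in queue
    ]
```

The design choice here is that the algorithm's role ends at a suggestion, which keeps the process out of "solely automated" territory; meaningful human review also requires that reviewers can, and sometimes do, depart from the suggestion.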