In today’s fast-moving hiring landscape, AI tools are marketed as a swift, efficient, and impartial solution. AI has revolutionised how resumes are screened and video interviews are conducted, among other things. Lurking beneath this promise, however, is a growing concern: algorithmic bias. As businesses continue to use AI to sift through applicants, a troubling pattern has emerged: these programmes tend to perpetuate human biases rather than eliminate them.
Large technology companies such as Amazon, LinkedIn and HireVue have all faced questions about the unintended effects of AI hiring tools. Their stories reveal both the technological shortcomings and the systemic difficulty of building fair, transparent recruitment algorithms.
Amazon’s Failed Experiment
Amazon’s case is a notable example of algorithmic bias in hiring. The company developed an AI recruitment engine in 2014 to automate resume screening, training it on historical hiring data that reflected gender disparities in tech. Over time, the system penalised resumes containing terms like “women’s chess club captain” or mentions of all-women colleges, indicating a preference for male candidates. Despite attempts to adjust the model, the bias persisted, leading to the project’s abandonment in 2018.
This case underscores a key issue: AI algorithms inherit biases from their training data. If historical hiring practices were discriminatory, the AI will replicate this bias unless it is thoroughly audited and corrected.
The Facial Analysis Controversy and HireVue
HireVue, an AI-powered online interviewing service, uses facial recognition, voice tone, and word choice in recorded interviews. Marketed to reduce human bias and automate hiring, it raised concerns among privacy activists and ethicists.
The Electronic Privacy Information Centre (EPIC) filed a complaint with the Federal Trade Commission (FTC) against HireVue, arguing that its facial recognition technology constitutes an unfair and deceptive trade practice. Critics highlighted that analysing facial expressions and speech can discriminate against neurodivergent individuals, people from non-majority cultures, or those with disabilities.
Amid growing pressure, HireVue announced changes to its use of facial analysis in evaluations, effective early 2021. This controversy sparked an industry debate on the ethical limits of AI in candidate evaluation.
The Algorithmic Amplification of Inequality and LinkedIn
LinkedIn, the largest professional networking site in the world, has also struggled to ensure fairness in its algorithmic tools. A 2021 study conducted by MIT found that LinkedIn’s job-recommendation algorithm tended to reproduce gender gaps: men were shown well-paid tech jobs, whereas women were steered towards lower-paying administrative work.
Since then, LinkedIn has promised to audit its algorithms more often and to make diversity and inclusion central to its machine learning models. Nevertheless, the case shows that even the best-intentioned platforms can end up reinforcing the status quo.
The Mechanics of Bias: How Discrimination Creeps into AI
AI hiring tools tend to be biased in three main ways:
- Training Data Bias: If the historical data used to train the system reflects discriminatory decisions, the AI will tend to repeat them.
- Feature Selection Bias: The features used to make decisions may be inadvertently correlated with race, gender, or socioeconomic status.
- Feedback Loop Bias: Early biased outputs feed into future recommendations, compounding the problem over time.
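The first mechanism can be made concrete with a deliberately simplified sketch. Here, a toy screener “learns” nothing more than each group’s historical hire rate from synthetic, illustrative data; the group names, numbers, and functions below are hypothetical, not drawn from any real system. The point is that a model trained on biased outcomes will faithfully reproduce them.

```python
# A minimal sketch of training-data bias, using entirely synthetic data.
# A toy screener learns per-group hire rates from biased history and
# then reproduces the gap when screening new candidates.
from collections import Counter

# Synthetic history of (group, hired) outcomes: group A was favoured.
history = ([("A", True)] * 70 + [("A", False)] * 30
           + [("B", True)] * 40 + [("B", False)] * 60)

def learn_group_rates(records):
    """Estimate P(hired | group) from historical decisions."""
    hired, total = Counter(), Counter()
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired  # bool counts as 0/1
    return {g: hired[g] / total[g] for g in total}

def screen(candidate_group, rates, threshold=0.5):
    """Pass candidates whose group's historical rate clears the threshold.
    An extreme simplification, but it shows what group-correlated
    features can encode inside a more complex model."""
    return rates[candidate_group] >= threshold

rates = learn_group_rates(history)
print(rates)               # {'A': 0.7, 'B': 0.4}
print(screen("A", rates))  # True  -- historically favoured group passes
print(screen("B", rates))  # False -- historically disfavoured group rejected
```

In a real system the group label is rarely an explicit input; proxy features correlated with it produce the same effect, which is what makes the bias hard to remove by simply deleting one column.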
The Fantasy of Objectivity
The illusion of objectivity is one of the most dangerous myths about AI in recruitment. AI is usually marketed as neutral and data-driven, yet algorithms are human-made: human choices, assumptions, and priorities shape every step, from data collection to model selection and beyond.
Without transparency, the fairness of such systems cannot easily be audited or challenged. Most AI vendors treat their algorithms as proprietary, so neither employers nor candidates know how decisions are reached.
Calls for Reform and Regulatory Scrutiny
Regulators and governments are now beginning to pay attention. In 2023, the U.S. Equal Employment Opportunity Commission (EEOC) launched an initiative to assess whether AI-based hiring tools comply with anti-discrimination law. New York City enacted a law requiring firms to audit AI hiring tools for bias and to notify candidates when such tools are used.
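One common screening heuristic such audits use is the EEOC’s “four-fifths rule”: a group whose selection rate falls below 80% of the highest group’s rate is flagged for further review. The sketch below shows the arithmetic with hypothetical numbers; the group names and figures are invented for illustration, and the rule is a rough screen, not a legal determination.

```python
# A hedged sketch of a four-fifths-rule check, the adverse-impact
# heuristic from EEOC guidelines. All figures are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Return True per group if its rate is at least 80% of the top rate."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

# Hypothetical audit data: selections out of applicants, per group.
rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```

Here group_b’s ratio is 0.30 / 0.50 = 0.6, below the 0.8 threshold, so an auditor would flag the tool for closer examination.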
The European Union’s proposed AI Act goes even further, classifying the use of AI in employment as high risk and placing it under stringent oversight. These regulatory actions signal that the era of unregulated AI use in hiring is coming to an end.
The Humanised Hiring Algorithm
AI-based recruitment is not going to disappear. In fact, it is bound to grow as firms strive to expand their operations and cut costs. Without close oversight, however, these tools risk entrenching the very inequalities they are supposed to address.
The future of hiring technology depends not only on innovation but, as the cases of Amazon, HireVue, and LinkedIn show, on responsibility as well. Algorithmic bias may be a quiet crisis today, but it is a loud and pervasive one for those left behind by flawed systems. Business leaders, technologists, and policymakers must come together to build AI systems that are not just smart but also fair.