Technology

‘Awkward and humiliating’: UK job hunters share frustration with AI interviews

Survey-led reporting suggests many UK applicants now meet automated screening or AI-assisted interviews—and a large share find the experience cold or embarrassing.

Newsorga desk · 8 min read

UK job applicants are increasingly reporting that AI-assisted hiring stages feel impersonal, confusing, and in some cases humiliating. The complaint is not simply that technology is present in recruitment; it is how that technology is used: minimal explanation, little feedback, no transparent appeal route, and decisions that appear final before a human conversation even begins.

What applicants are experiencing

The common workflow now includes automated CV ranking, asynchronous video interviews, chatbot pre-screens, and psychometric scoring engines. In theory, each stage improves speed and consistency. In practice, candidates say the process can feel one-way and opaque: they speak to a camera, submit timed answers, then receive generic rejection templates with no meaningful rationale.

That experience matters because modern job searches are already high-stress. When people cannot tell whether they were rejected for skills mismatch, keyword filtering, technical glitch, or model scoring behavior, trust in hiring fairness erodes quickly.

Why companies keep deploying these systems

Employer incentives are clear. Large firms can receive hundreds or thousands of applications per role. Hiring teams under cost pressure want triage tools that reduce manual screening time and create comparable scores across locations. Vendors market this as objective and scalable.

But speed and objectivity are not the same thing. A process can be fast and still biased, especially if model training data reflects past hiring patterns that already excluded certain groups.

The fairness and legal risk layer

Bias in AI recruitment can appear through proxy variables: language style, accent interpretation, prior-company signals, or CV formatting conventions correlated with class and educational background. If not audited, systems can reproduce historical exclusion while looking neutral on the surface.
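
To make the audit point concrete, here is a minimal Python sketch of one common check, an adverse-impact ratio in the style of the US "four-fifths rule" (a rule of thumb, not a UK legal test). The screening log, group labels, and pass counts below are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical screening log: (self-reported group, advanced past the AI screen).
# Every label and count here is invented for illustration.
log = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 35 + [("B", False)] * 65
)

passed, total = defaultdict(int), defaultdict(int)
for group, advanced in log:
    total[group] += 1
    passed[group] += advanced  # bool counts as 0 or 1

# Selection rate per group, then each rate relative to the best-performing group.
rates = {g: passed[g] / total[g] for g in total}
top = max(rates.values())
for g, rate in rates.items():
    ratio = rate / top
    flag = "FLAG: below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"group {g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this invented log, group B advances at a rate of 0.35 against group A's 0.60, an impact ratio of about 0.58. A real audit would treat that as a signal to investigate the proxy variables above, not as proof of unlawful bias.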

In the UK context, this intersects with equality law, data protection obligations, and expectations of explainability where automated decisions significantly affect individuals. Employers that cannot explain model-assisted decisions in plain language may face rising legal and reputational risk.

What good governance looks like

Responsible deployment should include at least five controls:

  1. Advance disclosure to applicants that AI-assisted screening is being used.
  2. Human-in-the-loop review for borderline or high-impact rejections.
  3. Regular subgroup bias audits and documented remediation.
  4. Clear retention/deletion policy for recorded interview data.
  5. Practical appeal channel where candidates can request reconsideration.

Without these controls, organizations risk short-term efficiency gains at the expense of long-term hiring quality and trust.

Why this can hurt employers too

A poor candidate experience is not only an ethics issue; it is a talent risk. Strong applicants with options may drop out early if they perceive the process as dehumanizing or unreliable. Over time, that can narrow talent pools and damage employer brand, especially in competitive professional segments.

There is also model drift risk: labor-market language changes, role requirements evolve, and old training patterns may stop reflecting current performance predictors. Static systems can silently degrade if not recalibrated.
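
To illustrate what drift monitoring can look like in practice, here is a minimal Python sketch using the Population Stability Index (PSI), a widely used score-drift metric. The synthetic beta-distributed scores and the conventional 0.2 alarm threshold are assumptions for demonstration, not parameters from any real vendor's system.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live score distribution against a fixed baseline.
    By convention, PSI above roughly 0.2 is read as material drift."""
    # Decile edges come from the baseline (deployment-era) scores.
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    # Clip live scores into the baseline range so every score lands in a bin.
    live_frac = np.histogram(np.clip(live, edges[0], edges[-1]), edges)[0] / len(live)
    # Floor the fractions so empty bins do not blow up the log term.
    base_frac = np.clip(base_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

# Synthetic example: candidate scores at launch vs. a year later.
rng = np.random.default_rng(1)
at_launch = rng.beta(2.0, 5.0, size=5_000)
a_year_on = rng.beta(2.0, 3.0, size=5_000)  # the score distribution has shifted
psi = population_stability_index(at_launch, a_year_on)
print(f"PSI = {psi:.3f} -> {'recalibrate' if psi > 0.2 else 'stable'}")
```

A rising PSI does not explain why scores shifted; it only flags that the live candidate population no longer resembles the data the model was tuned on, which is exactly the cue for recalibration described above.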

Practical advice for applicants

Candidates cannot control the system design, but they can reduce avoidable friction:

  • Confirm in advance whether interviews are recorded and machine-scored.
  • Prepare concise STAR-structured responses for timed prompts.
  • Test audio/video setup before submission.
  • Keep copies of job ads and submitted materials.
  • Request clarification where process rights are available.

None of this guarantees success, but it lowers the chance that technical noise is misread as low capability.

What to watch next

The next phase of this story will be regulatory and operational: whether employers move from "AI-first screening" to "AI-assisted plus accountable human review." Watch for published audit standards, procurement changes in HR tech contracts, and court or tribunal outcomes where algorithmic hiring is challenged.

Bottom line

AI in recruitment is no longer experimental; it is operational. The central policy question is now accountability: can employers prove that faster hiring tools are also fair, explainable, and appealable? If they cannot, public anger and regulatory pressure will continue rising.

An additional reality for 2026 is candidate behavior: frustrated applicants increasingly share rejection experiences publicly within 24-48 hours. That compresses reputational risk timelines for employers and pushes HR teams to improve process clarity before disputes escalate online.

In practical terms, organizations that combine automation with auditable human review are likely to outperform those relying on opaque scoring alone. Over the next 12 months, hiring trust may become a competitive advantage, not just a compliance checkbox.

Primary source reporting: https://www.theguardian.com/technology/2026/may/01/uk-job-hunters-frustration-ai-interviews
