Hired by a robot: What it’s like to have an AI interview
Choosing to do a job interview during the first week back at work after the Christmas break may not have been the wisest decision. My brain felt like wet cake, sodden and spongy, disintegrating into a pile of mush as I tried to focus on the screen. I had mindlessly chomped my way through a comically oversized chocolate coin, leaving me feeling mildly sick. In a normal job interview, I might reference these moments, infused with sardonic charm, to break the ice. But today, my interviewer can’t relate to feeling a bit sluggish post-Twixmas. He doesn’t know the discomfort of overeating and then pounding the cut-price advent calendar chocolate. And it’s not just because he’s a young, fresh-faced twenty-something who hasn’t been adding Baileys to his morning coffee. My rapport-building jokes won’t cut it because my interviewer isn’t a real person.
The “man” deciding my fate, dubbed “Carl,” is an AI interface designed to look and sound like a human. Created by HR-tech firm TestGorilla, he is a sophisticated checklist of keywords and phrases, fronted by an avatar. The use of AI in recruitment is rising: in the UK alone, it has tripled in the past year. Three in 10 UK employers now use AI in hiring, and 43% of large companies use it to interview candidates. TestGorilla has signed up close to 800 organizations to its conversational AI interview tool.
The role was one I have no experience in, yet I still felt nervous. The butterflies were partly because I had never done the thing they were asking about, so I decided to have fun and dream up a marketing campaign for a clothing line aimed at dachshunds. But my anxiety also stemmed from the fact that Carl is not a real person. I’ve always relied on my people skills in interviews; even if I fudge an answer, I’m confident in my “soft” skills, like emotional intelligence and making people smile.
I feed off other people’s energy in a pressurised situation, but online interviews make this harder. When you speak passionately to a human, there’s often a mirroring that takes place, a positive feedback loop. Carl, however, has an unchanging half-smile and dead-behind-the-eyes expression, leaving me flat and cold. I can’t muster even the slightest sparkle.
This kind of interview might spell the end of the “personality hire.” How do we guarantee we’re not populating an organization with highly skilled sociopaths? Carl sometimes does me a solid, double-checking my answers and weighing them against a framework. But it feels as though success lies in gaming an algorithm rather than building an authentic connection with my potential boss.
AI’s grip on recruitment is tightening. Job applications have surged by 239% since ChatGPT’s launch, with the average job opening receiving 242 applications. The number of applications making it to the hire stage has dropped by 75%, and 54% of recruiters review only half or fewer of the applications. Daniel Chait, CEO of Greenhouse, calls it an “AI Doom Loop.”
The use of AI has eroded trust, with 40% of job hunters reporting decreased trust in hiring, and 39% blaming AI. There have been allegations of built-in bias, with HR software company Workday facing a discrimination lawsuit for systematically screening out applications from workers over 40, racial minorities, and people with disabilities.
Hiring managers are also concerned about fraudulent activity, with 72% becoming more wary. A third of candidates admitted to using AI to conceal their appearance, 30% of hiring managers have caught candidates reading AI-generated responses, and 17% have caught candidates using deepfakes.
While I was tempted to use ChatGPT to ace the test, TestGorilla warns against it. As technology advances, we might end up with AI interviewers interacting with AI candidates. Chait believes we’ll need identity verification in the hiring process to ensure we’re not just interviewing AI.
Despite the concerns, there are positives to employing AI in recruitment. Chait points out that, unlike with a human interviewer, bias in an AI system can be audited and corrected systematically, and that automated assessments can work nights and weekends, scale to any volume of candidates, and operate in any language.
Candidates need to prepare for early screening by a sophisticated bot and clarify the rules around AI when applying. Employers should remember that behind each application is a human being desperate for a job, not just a collection of algorithms and credentials. Chait cautions, "They’re a full, three-dimensional human being."