When Amanda Claypool went looking for a fast-food job in Asheville, North Carolina, in early June, she ran into an unexpected and frustrating obstacle: malfunctioning chatbot recruiters.
A few examples: McDonald’s chatbot recruiter, “Olivia,” approved Claypool for an in-person interview, but technical glitches prevented her from scheduling it. A Wendy’s bot managed to book her an in-person interview, but for a job she couldn’t take. And a Hardee’s chatbot sent her to interview with a store manager who wasn’t there, hardly a seamless recruiting experience.
“I arrived at Hardee’s and they seemed quite surprised,” Claypool told Forbes. “The restaurant staff had no clue how to proceed or assist me.” She ultimately took a job elsewhere. “It seemed needlessly complicated,” she said. (McDonald’s and Hardee’s did not respond to requests for comment. A Wendy’s spokesperson told Forbes the bot creates “hiring efficiencies” and emphasized the company’s commitment to innovation.)
HR chatbots like the ones Claypool encountered are increasingly common in healthcare, retail, and restaurants, where they screen out unqualified applicants and schedule interviews with promising ones. McDonald’s, Wendy’s, CVS Health, and Lowe’s all use “Olivia,” a chatbot built by Paradox, a $1.5 billion AI startup based in Arizona. Companies like L’Oreal use “Mya,” an AI chatbot from a San Francisco startup of the same name. (Paradox did not respond to inquiries about Claypool’s experience.)
Unlike more sophisticated conversational systems such as ChatGPT, most hiring chatbots are not very complex. They have mainly been used to screen applicants for high-volume roles: cashiers, warehouse associates, customer service representatives. They rely on straightforward questions like “Are you proficient with a forklift?” or “Can you work on weekends?” But as Claypool’s experience shows, the bots can be glitchy, human help is not always available, and the rigid answers many bots require can get qualified candidates automatically rejected simply for not responding the way the chatbot expects.
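The rigidity described above can be made concrete with a minimal sketch of a rule-based screener. This is purely illustrative: the questions, pass criteria, and yes/no-only answer format are hypothetical stand-ins, not the actual logic of Olivia, Mya, or any vendor’s product.

```python
# Hypothetical sketch of a simple rule-based screening bot.
# Each entry pairs a question with the single answer that passes.
SCREENING_QUESTIONS = [
    ("Are you proficient with a forklift?", True),
    ("Can you work on weekends?", True),
]

def screen(answers):
    """Return True only if every answer exactly matches the required one.

    The brittleness is the point: any answer outside the expected
    yes/no format, or any single mismatch, means automatic rejection,
    with no room to discuss accommodations or exceptions.
    """
    for (question, required), given in zip(SCREENING_QUESTIONS, answers):
        if given is not required:
            return False
    return True

print(screen([True, True]))   # qualified applicant -> True
print(screen([True, False]))  # one mismatch -> False, rejected
```

A human recruiter could probe a “no” answer further; this kind of screener, by construction, cannot.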
This poses potential problems for people with disabilities, people who are not proficient in English, and older job seekers, experts say. Aaron Konopasky, senior attorney advisor at the U.S. Equal Employment Opportunity Commission (EEOC), worries that chatbots like Olivia and Mya may not offer people with disabilities or medical conditions alternative roles or accommodations. “A conversation with a human naturally allows room for discussing reasonable accommodations,” he says. “However, if the chatbot is overly rigid and the individual requires exemptions, the chatbot may fail to provide that opportunity.”
“It’s akin to how Netflix suggests movies based on your preferences for other films.”
Jeremy Schiff, CEO and founder of RecruitBot.
Discrimination is another concern. Biases in the data used to train AI can bake prejudice into the tools built on it. Pauline Kim, an employment and labor law professor at Washington University who studies AI’s role in hiring, pointed out that if a chatbot assesses factors such as response time, grammar, or sentence complexity, that raises the risk of bias. Detecting such bias is difficult, especially when companies are not transparent about why candidates were rejected.
Governments have recently moved to regulate the use of automation in hiring. New York City enacted a law in early July requiring employers that use automated tools such as resume scanners and chatbot interviews to audit them for gender and racial bias. Similarly, a 2020 Illinois law requires employers that use AI to analyze video interviews to notify applicants and obtain their consent.
For companies looking to cut recruiting costs, however, AI screening agents are an obvious choice. Matthew Scherer, senior policy counsel for workers’ rights and technology at the Center for Democracy and Technology, noted that human resources departments are often among the first to see workforce reductions. “Human resources has traditionally been a cost center for companies, not a revenue generator,” he said. “Chatbots offer a logical initial step in alleviating some of the recruiters’ burdens.”
That thinking drives Sense HQ, which supplies companies like Sears, Dell, and Sony with AI-powered chatbots that screen applicants over text message, helping recruiters work through large applicant pools. According to co-founder Alex Rosen, the platform has already engaged around 10 million job seekers, widening the pool of potential candidates.
“Our initial motive for building a chatbot was to enable recruiters to interact with a broader range of candidates than they could manage independently,” Rosen said, adding a crucial caveat: “We firmly believe that AI shouldn’t autonomously make hiring decisions. That’s where the risk arises. In our view, it hasn’t reached that capability yet.”
RecruitBot takes AI in hiring a step further, using machine learning to comb a database of 600 million job applicants drawn from publicly accessible data and job platforms, with the goal of helping organizations find candidates who resemble their existing employees. CEO and founder Jeremy Schiff likened the approach to Netflix recommending movies based on what viewers have already watched. But that approach carries its own bias risk: perpetuating a homogeneous workforce. Amazon, for instance, scrapped its machine learning-driven resume screening tool in 2018 after it discriminated against women, because its training data consisted predominantly of men’s resumes.
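The “Netflix-style” matching described above can be sketched as ranking candidates by similarity to current staff. This is a toy illustration under assumed inputs: the feature vectors, candidate names, and use of cosine similarity are all hypothetical, not RecruitBot’s actual method.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical features per person, e.g. (years experience, skill score, availability).
current_employees = [[5.0, 0.9, 0.7], [4.0, 0.8, 0.6]]
candidates = {
    "cand_a": [4.5, 0.85, 0.65],  # closely resembles current staff
    "cand_b": [1.0, 0.20, 0.90],  # a different profile
}

def rank_candidates(candidates, employees):
    """Rank candidates by average similarity to existing employees.

    The bias risk the article raises is visible by construction:
    whoever most resembles the incumbent workforce scores highest,
    so a homogeneous workforce stays homogeneous.
    """
    scores = {
        name: sum(cosine_similarity(vec, e) for e in employees) / len(employees)
        for name, vec in candidates.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank_candidates(candidates, current_employees))  # cand_a ranks first
```

This is the same failure mode as Amazon’s scrapped tool: if the reference population skews one way, the ranking inherits that skew.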
“Using chatbots as an initial measure is a highly logical approach to alleviate some of the workload on recruiters.”
Matthew Scherer, senior policy counsel at the Center for Democracy and Technology.
Urmila Janardan, a policy analyst at Upturn, a nonprofit organization dedicated to studying the impact of technologies on people’s opportunities, highlighted that certain companies have also embraced personality assessments as a means to filter out candidates. Notably, these candidate-screening questions might be entirely unrelated to the job requirements. “In fact, you might face the possibility of job rejection based on inquiries about personality traits like gratitude,” Janardan emphasized.
For Rick Gned, a part-time artist and writer, a personality quiz became part of the chatbot-driven interview he underwent for an hourly-wage shelf-stacking position at the Australian supermarket chain Woolworths. The chatbot, developed by AI recruitment firm Sapia AI (formerly PredictiveHire), prompted him to give 50- to 150-word responses to five questions. His answers were then analyzed for qualities and competencies matching the recruiters’ preferences. The chatbot determined that Gned “adapts well to change” and displays a “big-picture focus that occasionally leads him to overlook details,” and he advanced to the next stage of the interview. Sapia AI does not impose time limits on applicant responses, but the system evaluates sentence structure, readability, and the complexity of words used in the text-based replies, CEO and co-founder Barb Hyman said in an email.
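Surface-level text features of the kind described, sentence structure, readability, word complexity, are straightforward to compute, which is part of what makes them attractive to vendors and worrying to critics. The sketch below is illustrative only; the features and the long-word threshold are invented, not Sapia AI’s actual scoring.

```python
import re

def text_features(answer):
    """Extract crude surface features from a free-text interview answer.

    These proxies (sentence length, share of long words) say nothing
    about job competence, which is why critics worry they can penalize
    non-native English speakers and others who write differently.
    """
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    words = re.findall(r"[A-Za-z']+", answer)
    long_words = sum(1 for w in words if len(w) >= 7)  # crude complexity proxy
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0,
        "long_word_ratio": long_words / len(words) if words else 0.0,
    }

sample = "I adapt quickly to change. I focus on the big picture."
print(text_features(sample))
```

Two answers with identical substance but different phrasing would score differently here, which is exactly the opacity problem Kim describes: the rejected applicant never learns which feature sank them.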
Gned told Forbes the whole experience felt dehumanizing and unsettling. “While I belong to a demographic that isn’t severely impacted, I am apprehensive for individuals from minority backgrounds, who predominantly constitute the lower-income labor segment,” he said.
Another job seeker, who asked to remain anonymous so he could speak candidly, found a silver lining in talking to a chatbot. He has submitted numerous applications that drew no response at all, so the bot’s confirmation that his application had been received was at least something. “In many respects, it did provide a boost to my morale,” he said. “However, if I were required to engage in this (text-based chatbot interaction) for every job application, it would undoubtedly become a cumbersome ordeal.”