Bots in the HR Department: Recruitment in the Age of Generative AI

March 13, 2024

Article by: Chira Perla

Previously printed in the LexisNexis Labour Notes Newsletter.

Although artificial intelligence (AI) tools have been available to human resources (HR) departments for many years, the November 2022 release of OpenAI’s ChatGPT prompted HR professionals and their legal advisors to take a fresh look at how generative AI chatbots can support and improve HR work, including recruitment.

What makes this new class of AI unique is its ease of use and accessibility: the conversational prompts are very intuitive and, unlike most AI products that support recruitment (including automated hiring tools like resume screeners or applicant tracking systems), generative AI chatbots and related third-party plug-ins are either free or involve a nominal cost to use.

For HR professionals engaged in recruitment, the arrival of these tools has resulted in some quick wins, including the ability to efficiently and effectively generate Boolean searches to pipeline talent, craft or improve job descriptions, generate interview questions, and personalize communications with candidates.

But there are also significant potential pitfalls to using generative AI chatbots in recruitment. While the exact risks will vary by jurisdiction, HR professionals and their legal advisors should be alive to the following:

Accuracy and completeness: There are many examples of generative AI chatbots providing incomplete, inaccurate or even made-up information. Accordingly, HR professionals should not exclusively rely on chatbot-generated content in any step of the recruitment process (from pipelining to hiring).

Privacy: Information entered in prompts may become part of a generative AI chatbot’s training data set. This means that sensitive or confidential information used in prompts could be revealed to other users or third parties. HR professionals should therefore be wary of entering information about their organization or candidates into chatbots. They should also ensure that the collection, use and disclosure of candidates’ personal information complies with applicable privacy laws, particularly around consent. For example, candidate consent would likely be required to input application materials into a chatbot for screening, or to use an AI plug-in to crawl social media accounts to conduct a digital audit.

Human rights: In Canada, human rights laws prohibit discrimination in employment based on certain protected grounds, including sex, age, race and religion. The data sets used to train generative AI chatbots are incredibly large and often opaque, making them difficult (if not impossible) to evaluate for bias. Bias in training data can permeate results. Accordingly, the use of chatbots to screen candidates runs the risk of filtering them in a discriminatory way.

AI laws: AI regulation and legislation is rapidly changing, and any use of chatbots in recruitment requires close attention to evolving laws governing AI use. In Canada, the Artificial Intelligence and Data Act (AIDA) is proposed legislation that aims to set the foundation for the responsible design, development and deployment of AI systems.[1] According to recently proposed AIDA amendments, AI systems used to make employment-related decisions will be classed as high-impact, triggering the most onerous obligations under the legislation.[2] HR professionals should also be attuned to new and developing provincial laws – for example, s. 12.1 of Quebec’s new Law 25[3] sets out employer obligations when making hiring decisions based exclusively on the automated processing of personal information.


While generative AI chatbots offer many advantages, it is risky for HR professionals to leave recruitment to these tools – particularly the pieces that involve candidates’ personal information or final decision-making.

These risks remain even when using third-party software that incorporates generative AI chatbots or AI more generally. Employers are ultimately responsible for the methods they use to recruit talent, and so are unlikely to be shielded from liability should a vendor’s product run afoul of privacy, human rights or AI laws.

In all cases, HR professionals should understand the basics of how the technology they are using works and satisfy themselves that it complies with their employer’s internal policies and legal obligations.