Meta to enable AI use in job interviews

The technology firm Meta is to allow some coding job candidates to use an artificial intelligence (AI) assistant during the interview process, according to internal Meta communications seen and reported by 404 Media (29 July).

To prepare for this, leaders of Meta reportedly asked employees to volunteer for a “mock AI-enabled interview”.

The internal Meta post, according to 404 Media, reads: “Meta is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”

Meta's trial of AI in the interview process has both upsides and risks, according to Raoul-Gabriel Urma, founder and group CEO of education technology company Cambridge Spark.

Urma said: “Meta is an industry leader; there will be countless companies that will view the company’s decision to allow AI as a sort of green light. There’s the risk that companies might jump the gun and rush to allow AI without any real policies or safeguards.”

He added: “For too long, there’s been an unfair stigma around candidates using AI, like it’s cheating or dishonest. Employers should want to understand how candidates use AI. It’s one of the most sought-after skills in the modern workforce.”

Urma added that the key is transparency, balance and clearly defined AI hiring policies. “If candidates are allowed to use AI during the application process, employers should have a clear policy that requires them to disclose its use. But, just as important, employers must make clear that using AI won’t count against the applicant.”

Khyati Sundaram, CEO of recruitment software Applied, said: “More companies will follow in the footsteps of early adopters like Meta and Anthropic. However, it won’t be just because they are copying bigger names, but because it makes sense for the roles they are hiring for.

“Ultimately, it’s up to employers to decide whether they want to test for AI literacy, and that will depend on their stance and the job at hand. The most important thing is that policies around AI use are clearly defined, and that hiring decisions are made fairly and objectively – based on candidates’ role-relevant skills.

“It’s about putting a candidate’s submission in the right context and ensuring a fair comparison between AI-assisted and non-AI work. Another option is to give candidates two tasks: one where AI is allowed, and one where it’s not. That way, hiring managers can assess both core skills and AI fluency.”

Jonathan Kestenbaum, managing director at talent solution firm AMS, said: “Younger generations tend to be more comfortable using AI tools, which could give them an edge. One way to level the playing field is to give candidates options; maybe they can choose to be evaluated in an AI-assisted process or go through a more traditional route. Flexibility will be key to ensuring fairness.

“I don’t see any immediate red flags, but every major move like this has ripple effects. It depends on how it’s implemented and how transparent they are about the role AI is playing. If it’s done thoughtfully, others may follow. If it’s heavy-handed or opaque, it could backfire and damage trust.”
