Even with training, de-biasing the hiring process is a challenge—we’re still human and humans are governed by subjectivity and subconscious pressures. What if our approach could reduce these factors?
Over the past year or so, we have overhauled our engineering hiring process at Nav around a suggestion that made many of us fairly uncomfortable:
What if the people running our interviews didn’t weigh in on hiring decisions?
This went against everything I knew about recruiting. I have decades of experience in my field and I know how to run a good interview, so why shouldn’t I be able to surface my recommendations? And shouldn’t I trust my peers to do the same?
Having used this process many times now, I can confidently say that we make better decisions when we reduce how much we need to trust individuals to make the right call. In a process where interviewers demonstrate the candidate’s qualifications rather than presenting their conclusions, we not only get better hiring decisions, but we also impose a constraint on our process which requires us to constantly improve its design and our engineers’ recruiting skills.
Performing the interviews.
Our series of interviews seems pretty standard from the candidate’s standpoint: we screen résumés, our recruiters kick off a discussion with an applicant, and that is followed by some number of conversations with pairs of interviewers about design, coding style, experience, or whatever traits we’re specifically looking for in the role. Perhaps a little less commonly, our hiring managers only speak with the candidate toward the end, mostly to sell the team we’d like them to join. When we’re considering a candidate for multiple teams, we’ll sometimes have each engineering manager hold a short discussion about why they think their team would be a good fit.
As each interviewer goes through their sessions, we ask them to record facts and observations in Greenhouse, the recruiting platform we use. The best writeups annotate each entry with whether it indicates something positive, overtly positive, negative, or very negative; this will become important later. The template through which we collect these observations is structured around the traits the interview seeks to demonstrate.
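To make the shape of these notes concrete, here is a rough sketch of how such an annotated observation might be represented. The field names and signal labels are invented for illustration; they are not Greenhouse’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: these names are invented for this post,
# not Greenhouse's actual fields.

class Signal(Enum):
    OVERTLY_POSITIVE = 2
    POSITIVE = 1
    NEGATIVE = -1
    VERY_NEGATIVE = -2

@dataclass
class Observation:
    trait: str       # the trait this interview is designed to demonstrate
    evidence: str    # a concrete fact or observation, never a conclusion
    signal: Signal   # the interviewer's annotation of that evidence

notes = [
    Observation(
        trait="architectural design",
        evidence="Sketched a sharded design and called out the hot-partition "
                 "risk before being prompted.",
        signal=Signal.OVERTLY_POSITIVE,
    ),
    Observation(
        trait="architectural design",
        evidence="Could not explain how the proposed cache would be invalidated.",
        signal=Signal.NEGATIVE,
    ),
]
```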
Two potentially surprising details about this process, though, are that we do not prime our interviewers with a level or title for the candidate, and we do not ask them for their conclusions about level or title. We just want the facts and an assessment of whether we should be excited to hire the candidate. The lack of priming about level can take a while for our panels to get used to, since it can be somewhat difficult to prepare for an interview at a level you’re uncertain about, but one of my peers describes her approach excellently: “I’m going to ask you progressively harder questions until either you don’t know the answer, or I’ve gotten out of my depth and now you’re teaching me something.” A great conversation is structured to uncover depth rather than verify a specific level!
Air-gapping the decision.
Why all of this focus on assertions of observation without conclusion, then? Do we not trust our interviewers? We do trust them to know how to learn things about the candidate by wielding their professional experience. We also assume that, in most situations, they would make the right call themselves. We balance these things, however, with an acknowledgment that trust in fallible individuals creates unscalable single points of failure in a hiring process, and that even the most well-informed individual will have blind spots in their approach to getting to know new people. We can work around these biases by air-gapping the hiring decision: we decide on level and hiring through a panel of engineering leaders who have not met the candidate and therefore have had no opportunity to form assumptions about them based on their own socialization blind spots.
After the interview series is completed for a candidate, we convene a hiring decision panel composed of the recruiter, the hiring manager, a facilitator, and two barometers. The facilitator and barometers are drawn from a pool of senior people in the organization who have been trained and calibrated for these tasks, excluding anyone who conducted the interviews. Before we arrive at this meeting, everyone is instructed to review all of the interview notes. The hiring manager forms a hypothesis about what level the candidate should be hired at and prepares a short pitch on why they want this candidate to join the company and why that level makes sense.
The facilitator is the driver of the panel discussion. They first ask the hiring manager for their pitch as an introduction of the candidate (who is, of course, not present) to the barometers. After the level is proposed, the facilitator takes the barometers through a short series of yes/no questions about whether the notes demonstrate that the candidate possesses the traits we need at the proposed level. To support this, it is critical that each interviewer has provided only assertions about evidence rather than their own assessment: if I say that the candidate demonstrated senior-level competency in architectural design, the barometers would be forced to answer that no, the notes do not demonstrate anything, since they have only my assessment and no examples to judge. The positive/negative format we encourage for these notes helps the barometers scan through them to quickly locate specific points to back up their answers.
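In effect, the barometers’ job reduces to something like the following sketch: every trait we need at the proposed level becomes a yes/no question that can only be answered from recorded evidence. The trait names, level, and scoring here are invented purely for illustration.

```python
# Invented trait names, level, and scoring, purely for illustration.
REQUIRED_TRAITS = {
    "senior": ["architectural design", "coding fluency", "mentorship"],
}

# (trait, evidence, signal): signal > 0 is positive, < 0 is negative
notes = [
    ("architectural design", "Called out the hot-partition risk unprompted.", 2),
    ("mentorship", "Described coaching a junior engineer through an incident review.", 1),
    ("coding fluency", "Needed heavy hints to get past the first test case.", -1),
]

def demonstrated(trait: str) -> bool:
    """Yes only if there is concrete positive evidence and nothing strongly negative."""
    signals = [s for t, _, s in notes if t == trait]
    return any(s > 0 for s in signals) and all(s > -2 for s in signals)

# One yes/no answer per required trait, judged from evidence rather than opinion.
assessment = {trait: demonstrated(trait) for trait in REQUIRED_TRAITS["senior"]}
print(assessment)
```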
The panel session ends with some discussion across the group about tradeoffs in the candidate’s strengths and weaknesses, whether any critical context was missing from the assessments, and finally a hire/no-hire decision at the proposed level. At this point, there are four possible outcomes: accept the panel’s recommendation; re-run the assessment at a lower level if the panel opted not to hire; run it again at a higher level if the candidate seemed to clearly surpass the proposed level; or, if there were major deficiencies in how the run-through occurred, the hiring manager may appeal by requesting a new hiring panel.
The hiring panel procedure has been instrumental in reducing latent bias in our hiring process, but perhaps even more importantly, it has added a constraint that requires our interviewers to think more critically about their approach to hiring. The detached nature of the process means that no amount of positive feeling about a candidate will get them hired; instead, interviewers must build their conversations around whatever most clearly demonstrates the candidate’s qualifications.