@here I need a reviewer!
TL;DR: about half of candidate applications for software engineer positions don’t work. Reviewing them is simply a waste of your engineers’ time. In this post, we explore why and propose how to get this time back.
Have you ever seen a recruiter in your company chat posting something like:
@here does anybody have experience with Language X to review a candidate's challenge?
It is not hard to see it for what it is. It is a cry for help.
What recruiters are looking for is a hiring decision, and they need an engineer to do it for them.
The hidden costs of engineering engagement
If everything goes well, one of your engineers will speak up and accept the task of reviewing this code.
The process may look something like this:
Recruiter forwards an email with an attached archive containing the solution.
The engineer receives the archive and cracks it open to figure out what it is.
They may have to install missing software on their laptop to open the challenge files. For example, your team works in Ruby, but the challenge is in Java.
Once opened, they will then check if there are any tests that they can run.
If there are tests, they will read them first to check what exactly the candidate is testing, and whether the tests are reasonable and sensible in the first place.
The engineer may then run those tests, which may or may not work. Failures can put the engineer into fixing mode: are the tests broken or incorrect, or are they failing because of a different environment? Either way, a couple of hours will pass.
The engineer will not be able to perform a real correctness check, not unless they built the testing infrastructure for this challenge in advance. Reading source code is not enough to verify that it is correct. So they will only check that there are no grave errors and that the code follows the corporate style and their personal preferences. After all, that is what they do on every pull request.
After a couple of hours of reading, the engineer may form an impression of the code quality. Unfortunately, even after all this time and effort, they still have no proof of whether it will work in production.
Yet the engineer will need to make a hiring decision to move on with their day, or what remains of it.
What the decision costs
You do not want the review to be superficial—or worse, a coin toss—when hiring someone. You want to get the most signal out of it to make the best possible decision. A credible review of code written by other people requires skill and experience. The engineers who can do that are usually the most tenured ones on your team (and likely the ones earning the most). Reviewing the solutions will consume the time of your most seasoned veterans—the very same engineers you rely on to build your product.
They will have to stop what they are doing and likely spend their whole day reviewing the submission.
People are generally decent, and they will try to be fair. If the code they are reviewing has no tests, what will they do? They will do their best to see whether a good developer still hides beneath the tainted first impression. They will try to get the best out of the candidate's solution. The better they do, the more time it consumes.
The worst-case scenario is that this goodwill—or high pressure to hire someone—will translate into a bad hire. And it will bring all the added costs of it spreading through the company and impacting your teams.
Should you worry?
If you look closely and honestly, you can see that most of what your engineers are doing is toil. Repeated work, which brings little to no value. Highly inefficient use of your engineering time.
We ran hundreds of such classic interviewing loops and discovered that reviewing about 40% of the code challenges was a waste of engineering time. Some were incomplete, others did not fit the problem description, and some were a complete trainwreck.
Another 20% will not pass a thoughtful review process, and only the remaining 40% form the pool out of which you can hire. Not to mention that this distribution, naturally, is heavily skewed towards junior-level developers: fewer than 10% of your original pool of applicants will be senior-level people and above.
Around 60% of the submissions will yank your people away from their tasks only to end in a direct rejection. They will impact your product development without any return on investment. Do this often enough, and your whole development pipeline will grind to a halt.
(Note: we processed about six hundred candidates when we calculated those percentages.)
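The back-of-the-envelope arithmetic behind those numbers looks like this. The candidate count and the 60% rejection rate come from the post itself; the two hours of engineering time per review is an assumption for illustration only.

```python
# Cost of reviewing submissions that end in a direct rejection.
candidates = 600          # roughly the pool the post's percentages are based on
rejection_rate = 0.60     # broken submissions (40%) + failing review (20%)
hours_per_review = 2      # ASSUMED average engineer time per submission

wasted_hours = candidates * rejection_rate * hours_per_review
print(wasted_hours)  # 720.0
```

Even with a conservative hourly estimate, that is hundreds of senior-engineer hours spent producing nothing but rejections.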
What to do
If you repeat, again and again, things that leave you in a worse place than before—stop doing them!
Stop making bad hiring decisions!
Only then can you move forward. Counterintuitively, the path forward is not as complicated or obscure as it might seem. What you need to do is improve your filtering.
Stop burning time on candidates that have zero chance of getting through. Here we are talking about the initial 60% of the challenges you get back. Every hour invested here is a wasted hour; your senior people should focus on building and maintaining your product instead. As a side note, you are also saving them from the frustration of reviewing lousy solutions. After all, the happier they are, the better for your company.
Define a set of rules and metrics that you expect a solution to meet. Then communicate them to the candidates in all fairness. No unit tests is a no-go? So be it, but let the candidate know in advance, so the expectations are clear. That not only makes it a fair game but also provides you with the tools to apply those rules.
What we did, and what worked for us
We applied a simple rule for our challenges: it has to work in production.
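In practice, "works in production" can start with something as blunt as a smoke test against the candidate's running service. A sketch under stated assumptions: the `/health` endpoint and the `smoke_test` helper are illustrative, not part of any described system—substitute whatever entry point your challenge defines.

```python
import urllib.request

def smoke_test(base_url, timeout=5):
    """Return True if the deployed service answers with HTTP 200.

    The "/health" path is an assumed convention for this example;
    any endpoint the challenge specifies would do.
    """
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, and HTTP errors
        return False
```

A check this crude already separates the 60% of submissions that never worked from the ones worth a human reviewer's time.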
The immediate improvement was that we no longer had to spend any time on the 60% of solutions that did not work. Some candidates did not reply, and some did not regard the test as engaging or even fair. Others insisted on a follow-up interview despite never shipping something that worked. Either way, we freed our engineers from reviewing all those submissions.
That alone resulted in more than a 2x decrease in time spent on reviews. We returned hundreds of hours of engineering time to the company by removing all this toil. For us, it resulted in happier engineers across the company. For candidates, it improved transparency: they now understood what to expect upfront and how well they were doing during the challenge. Overall, that increased our interview-to-offer rate by roughly 50%.
Trying it out (aka the marketing pitch)
This simple change yielded fantastic results for us, so we built it into a product to help companies that want to hire smartly. Engineering time is an invaluable resource, so start saving it with AutoIterative. Reinvest it back into your company and focus on your product while we do the filtering for you. Then you can focus on interviewing the people who can deliver and meet the bar that you set.
Try the free, fully functional demo of the AutoIterative platform.