Case interviews

Here’s Why We Do Case Interviews for Software Engineers

Years ago, we uncovered one of the most problematic elements of the Software Engineer hiring process: the interviews aren’t very realistic.

At Revelry, we use case interviews to create the most realistic representation of what it’s like to do successful work as a Software Engineer. Here’s why we think the other methods aren’t useful, and how we interview to add the best Revelers to the team.

Problem: Useless whiteboarding exercises

A company once came to us to serve as their interim technical team, since they were having a hard time hiring new Software Engineers of their own.

They put me through their technical interview – the one they had been giving to their prospective new employees.

I was asked to implement a red-black tree as a class in JavaScript, and then demonstrate how to balance the tree structure. Look, this is college Computer Science “Data Structures and Algorithms” material, and I was extremely rusty on it.

I pushed back, asking the interviewer just how often they were balancing red-black trees in their JavaScript code. The answer, of course, was never. So I asked for a real problem to solve instead. We worked through it together, and that customer became a loyal client for years.

This is why we do case interviews, not whiteboard challenges. Whiteboard challenges test for things we don’t use day-to-day in our work, and they don’t test for all the skills we do use.

Also problematic: Other “hands-on coding exercises”

There are a few other common types of “hands-on coding exercises”. Here’s why we don’t think they’re helpful, either.

Problematic screening exercise: The online or take-home code test

For this test, you give the candidate a set of programming problems to work through on their own. Then they hand in their answers, like a college exam.

This type of test can’t show a candidate’s capacity for communication, their ability to ask good questions, or their ability to bridge between developer and non-developer worlds.

To give this test, you have to phrase the questions so that the candidate knows exactly what to do, since that’s the last communication they’ll receive from you before they answer. That doesn’t leave any space for the discovery process.

Do you want to spend the next few years feeding your team exact test-like instructions?

Still problematic, but better: Working in your real codebase

Inviting your candidate to work in your real codebase is better in some ways. At least they have more access to the interviewers in this scenario.

But there are still problems. For one, you’re requiring someone to work in your exact technical stack, which they might not be familiar with. That isn’t necessarily representative of their skills, since good developers can learn a new stack very quickly.

I’d rather hire someone who is an excellent general problem solver and communicator, and a polyglot programmer, than someone who knows my exact stack but lacks those other qualities.

We like case interviews because they’re realistic

Case interviews are an interview format that creates a realistic scenario, allowing the candidate to apply the real skills they would need to be successful in our work.

A case interview tests not just programming skills, but communication, estimation, and risk management. Perhaps most importantly, it tests the ability to translate a problem statement into a working technical solution. Problem statements and intended outcomes are normally articulated by a non-technical person, which is why this translation skill is so crucial in our work.

We interview for the job we have: helping people to solve their organizations’ problems, not giving computer science lectures.

Here’s what a case interview looks like

At Revelry, the case interview starts with a scenario. We have a few case scenarios which are based on actual past projects (with names changed to protect both the innocent and the guilty).

The interviewee is allowed to ask as many questions as they like, at any point.

And they should. In our work, we need to ask clarifying questions to make sure we’re solving the right problem and that the solution we’re proposing will really work.

We ask questions in return, starting with broad ones.

  • What does this client need?
  • What do you think the hardest part of this project would be?
  • How would you figure out what technology to use to solve this?

Next, we ask more specific questions.

  • What tables would we need in our database?
  • What kind of columns might each of those tables have?
  • How do they relate to each other?
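
To give a sense of the level of detail we’re after here, below is a rough sketch (in Python, though any language is fine) of the kind of data model a candidate might describe. The scenario, a made-up task-tracking app, and every table, column, and relationship in it are hypothetical examples for this post, not one of our actual case scenarios.

    # Hypothetical data-model sketch for a made-up task-tracking scenario.
    # Table and column names are illustrative only.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Client:
        id: int
        name: str
        contact_email: str

    @dataclass
    class Project:
        id: int
        client_id: int                     # each project belongs to one client
        name: str
        due_date: Optional[datetime] = None

    @dataclass
    class Task:
        id: int
        project_id: int                    # each task belongs to one project
        title: str
        assignee_id: Optional[int] = None  # optional link to a user account
        completed: bool = False

The notation doesn’t matter; a diagram or a plain list of tables works just as well. What we’re listening for is whether the relationships match the problem.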

Then, we program.

The interviewee shares their screen. We ask them to implement one specific feature of the solution. They can use any programming language and framework they want. We don’t ask them to compile or run the code, because setting up a new project with boilerplate and scaffolding takes too long, and because we trust that people can use a generator or read the documentation when they need to.

Since we don’t ask them to work in a real project directory, we also don’t focus on getting method signatures right from memory or even having perfect syntax. We focus on whether they have a solid grasp of the concepts of practical web development and whether they can faithfully translate a non-technical problem statement into a working technical solution.
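
To make that concrete, here’s roughly the level of fidelity we mean. This is a hypothetical example written for this post, not a transcript from a real interview: the feature (marking a task complete), the in-memory stand-in for a database, and all of the names are assumptions, and in the actual interview even this much polish isn’t required.

    # Rough, self-contained sketch of the "implement one feature" step for a
    # hypothetical task-tracking app: mark a task complete, with a basic
    # permission check. The TASKS dict stands in for a real database.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class Task:
        id: int
        title: str
        assignee_id: Optional[int] = None
        completed_at: Optional[datetime] = None

    TASKS = {1: Task(id=1, title="Ship the billing report", assignee_id=42)}

    def complete_task(task_id: int, user_id: int) -> Task:
        task = TASKS.get(task_id)
        if task is None:
            raise KeyError(f"no task with id {task_id}")
        if task.assignee_id != user_id:
            raise PermissionError("only the assignee can complete a task")
        task.completed_at = datetime.now(timezone.utc)
        return task

    if __name__ == "__main__":
        print(complete_task(1, 42))

What we’re really evaluating is the conversation around the code: the clarifying questions, the edge cases the candidate raises, and how they explain their choices.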

This is one of the ways we’ve built one of the best teams in Software Engineering. We treat interviewees like they are already on the team, and then we see how they do.
