
Hiring the right software engineers is one of the highest-leverage decisions an engineering organization can make. In the book Software Engineering at Google, Google emphasizes that software engineering is not just about writing code — it is about building maintainable systems, collaborating effectively, scaling engineering practices, and sustaining long-term velocity. A strong engineer improves not only the codebase, but also the quality of discussions, system design decisions, testing culture, operational stability, and mentorship within the team. Conversely, hiring the wrong engineer creates long-term costs that are often invisible at first: fragile systems, technical debt, poor maintainability, weak testing practices, and reduced team productivity. Interviews, therefore, should not be treated as a checklist exercise, but as a way to evaluate how someone will perform in actual engineering work.

What Not To Do During Interviews

One of the biggest mistakes companies make during interviews is asking shallow technology checklist questions. Questions such as “Have you used Spring Boot?”, “Do you know React?”, or “Have you worked with Kubernetes?” reveal almost nothing about actual engineering capability. A candidate may have used a technology for years without deeply understanding it, while another candidate may have learned it recently but understands the underlying engineering concepts far better.

Similarly, asking purely theoretical questions like:

  • What are the SOLID principles?
  • What is Dependency Injection?
  • What is DRY?

often tests memorization rather than engineering judgment. Many candidates can recite textbook definitions perfectly while failing to apply those principles in real systems.

Software engineering is ultimately about implementation trade-offs, maintainability, debugging, testing, and collaboration under constraints — not theory recitation.

Another questionable practice is over-reliance on online coding assessments and algorithm platforms such as LeetCode. While algorithms and data structures do matter for certain domains, most real-world engineering work does not involve solving obscure graph problems or inverting binary trees within 20 minutes under pressure.

Real engineers work with existing codebases, debug production issues, write tests, review pull requests, and improve maintainability. Excessive emphasis on puzzle-solving often filters for candidates who are good at interview preparation rather than candidates who are good at engineering.

Interviews should resemble actual work as closely as possible.

Another common mistake is conducting interviews in an adversarial manner, where interviewers attempt to “catch” candidates making mistakes instead of understanding how they think and collaborate. Good interviews should evaluate reasoning, communication, adaptability, and engineering maturity — not create artificial stress.

What Categories Should Be Tested

The first category should always focus on the actual languages, frameworks, and technologies used in the real project. If the project uses Java, Spring Boot, React, GraphQL, or Kubernetes, then the interview should naturally revolve around realistic scenarios involving those technologies.

Instead of asking abstract trivia questions, use examples based on actual engineering problems the team encounters.

One of the best approaches is to use a simplified subset of the real project codebase. Take a real function or service that contains meaningful complexity and ask the candidate to work with it.

The code could intentionally contain:

  • Poor naming
  • Duplicated logic
  • Tight coupling
  • Missing error handling
  • Performance problems
  • Hidden bugs

Ask the candidate how they would improve the code, refactor it, debug it, or redesign parts of it. This evaluates practical engineering ability far more effectively than theoretical questions.

It also reveals how candidates think about:

  • Readability
  • Maintainability
  • Scalability
  • Testing
  • Engineering trade-offs
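
As a concrete sketch of such an exercise, here is a small Java snippet seeded with several of the flaws listed above. It is entirely hypothetical — the class, names, and discount rule are invented for illustration, not taken from any real codebase:

```java
import java.util.List;

// Hypothetical interview snippet, deliberately seeded with flaws:
// unclear names, a duplicated loop, a magic number, no null/empty
// handling, and double used for money.
public class OrderUtils {
    // d: list of {price, quantity} pairs; b: whether a discount applies.
    public static double calc(List<double[]> d, boolean b) {
        double t = 0;
        for (double[] x : d) {            // NullPointerException if d is null
            t = t + x[0] * x[1];
        }
        if (b) {
            double t2 = 0;
            for (double[] x : d) {        // same loop duplicated verbatim
                t2 = t2 + x[0] * x[1];
            }
            return t2 * 0.9;              // magic number: what is 0.9?
        }
        return t;
    }
}
```

Asking "what would you change here, and why?" quickly reveals whether a candidate notices the missing null handling, questions the magic number, and proposes extracting the duplicated loop — or merely renames a few variables.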

Automated Testing Matters

Another highly important category is automated testing. Many engineers claim to value testing, but interviews rarely evaluate whether they can actually write good tests.

Give candidates a realistic piece of code and ask them to write unit tests for it. However, the goal should not be writing tests purely for coverage numbers.

Good engineers understand that tests should validate:

  • Behavior
  • Business logic
  • Edge cases
  • Failure scenarios

rather than implementation details.
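
As a sketch of what that looks like in practice, consider a hypothetical tiered-discount function (the class and rules are invented for this example). The tests below pin down observable behavior, a boundary edge case, and a failure scenario — none of them depend on how the method is implemented internally. Plain checks are used here to keep the example self-contained; a real project would use a framework such as JUnit:

```java
// Hypothetical code under test: tiered discount rules.
public class Pricing {
    // Orders of 500+ get 20% off, 100+ get 10% off; negative amounts are rejected.
    public static double discounted(double amount) {
        if (amount < 0) throw new IllegalArgumentException("negative amount");
        if (amount >= 500) return amount * 0.8;
        if (amount >= 100) return amount * 0.9;
        return amount;
    }
}

// Behavior-focused tests: outcomes, edge cases, and failure scenarios.
class PricingBehaviorTests {
    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }

    public static void main(String[] args) {
        // Behavior: below the threshold, the price is unchanged.
        check(Pricing.discounted(50) == 50.0, "below threshold: unchanged");
        // Edge case: the boundary value itself earns the discount.
        check(Pricing.discounted(100) == 90.0, "boundary earns the discount");
        check(Pricing.discounted(500) == 400.0, "higher tier at its boundary");
        // Failure scenario: invalid input is rejected, not silently accepted.
        boolean rejected = false;
        try { Pricing.discounted(-1); } catch (IllegalArgumentException e) { rejected = true; }
        check(rejected, "negative input is rejected");
        System.out.println("all behavior tests passed");
    }
}
```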

Strong candidates usually identify meaningful assertions naturally. They also understand when mocking is appropriate and when excessive mocking makes tests brittle and meaningless.

Testing ability is often one of the strongest indicators of engineering maturity.

Test Adaptability and Learning Ability

Another category that is frequently overlooked is adaptability and learning capability.

Technology changes constantly, and engineers will inevitably encounter unfamiliar tools, frameworks, or architectures. Instead of only testing existing knowledge, introduce the candidate to something they have not used before and ask how they would approach learning and implementing it.

For example, ask questions such as:

  • What steps would you take to understand it?
  • How would you validate whether it is suitable?
  • What documentation or resources would you look for?
  • How would you prototype or test it?
  • What would you do if your initial approach failed?
  • How would you troubleshoot issues during implementation?

This evaluates problem-solving ability, resourcefulness, and engineering mindset far better than memorized answers.

Strong engineers are not defined by knowing everything already — they are defined by how effectively they can learn, adapt, debug, and deliver solutions when facing unfamiliar problems.

Final Thoughts

A good software engineering interview should mirror actual engineering work. It should evaluate how candidates think, communicate, debug, test, improve systems, and adapt to change.

The goal is not to find candidates who can memorize definitions or solve artificial puzzles under pressure.

The goal is to identify engineers who can contribute meaningfully to real systems, collaborate effectively with teams, and continuously improve both the codebase and the engineering culture around them.