Training Badge
Submitted by LonPM on


I am interviewing a candidate for a specialist position in an area I have very little experience in, i.e. software testing. I am a project manager, and this is a position for a tester on my project. Can anyone suggest how to approach this interview? I would not be able to interview the candidate on his technical skills, as it is not my area of expertise. Also, I'm relatively new to interviewing.


gdc2579's picture

Hi LonPM,

My suggestion is that you start with the job description and craft questions specific to it. Next, you could use something like the Interview Creation Tool to generate behavioral questions. If you are new to interviewing, the tool helps you think about appropriate responses and recognize the better answers.

I have found it effective to create a panel of interviewers and add content experts who can help with more technical questions and give me feedback on the quality of the answers. Be sure that questions are agreed on in advance and consistent for all applicants. This will be easier if you are interviewing for an outside hire, rather than an internal transfer, because you can involve your existing team members.

Depending on the number of interviews, you will find that some candidates stand out as really sharp, some are not qualified at all, and most are somewhere in the middle. Technical skills are important, but work ethic is equally important. In my experience, finding out what an applicant can do, or thinks you should do, in a given scenario is easy. Determining what they WILL do is less simple and takes a little experience, but the behavioral questions help with that.

If you have two equally qualified applicants, go for the one who is more motivated (I know that's hard to measure). The bottom line is that you don't have to interview alone. You can pull in others to help you assess the candidate, until you become more comfortable with the technical requirements. Of course, you could simply solicit questions from this forum, but you will miss opportunities for good follow up questions during the interview. 

I hope this helps,


Mark's picture
Admin Role Badge

You're a licensed member.  Use the Interview Creation Tool for everything but the technical stuff, and then talk to 2-3 software testers for 3-4 technical questions each.  Use our "how to create simple behavioral interviewing questions" podcast to turn their raw work into finished products you can add to the technical questions of the interview the ICT creates for you.



maura's picture
Training Badge

I agree with Gary: you need someone on your interview panel who can assess the candidate's technical skills.

I'm a Software QA Manager, and I use the ICT for the behavioral part of the interview when screening testers and QA leads.  It's been really valuable.  You'll want to pay special attention to themes like sticking to your guns when time is against you, managing multiple projects, following through on details, and maintaining data accuracy. 

A good tester doesn't just find defects; they have a positive effect on the project by preventing defects and by getting defects fixed as early as possible in the lifecycle.  For that to happen, they need to be able to cut through the noise and clearly describe the problem and its scope.  Analytical skills are key, as well as the ability to clearly and concisely describe a problem and its impact.

The best testers can see both the forest and the trees... they have extreme attention to detail combined with an understanding of the big picture.  To that end, a single typo, grammatical error, or formatting error on a tester's resume is a huge red flag - those candidates shouldn't make it to the interview phase for a job where a healthy dose of perfectionism is key.

Also, a couple of hints on QA-related certifications might be helpful:  The ISTQB Foundation Level is the most common tester certification these days - but it's relatively easy to get and just proves that they know the basics of testing.  There are higher levels of certifications available within ISTQB but I rarely see candidates that have them.  CSTE and CSQA are more costly and somewhat tougher to achieve.  The CSTE certification focuses more tightly on testing (similar to the ISTQB but a little deeper), whereas the CSQA's subject matter is the entire Software Development Lifecycle, and there is more focus on preventing defects versus finding them in test.

Hope this helps.  I'm happy to share more detail about my interview process if you like.


Mark's picture
Admin Role Badge

Maura and Gary-

Well done.


LonPM's picture
Training Badge

Maura, Gary & Mark

Thank you for your comments and suggestions. I'll use the ICT and get specific technical questions from our existing testers.

Kind Regards


LonPM's picture
Training Badge

 Hi Maura

I would greatly appreciate any words of wisdom you would be kind enough to share with me.



_Ryan_'s picture

There are a lot of different categories of testers, most broadly manual vs automated.  An automated tester requires a specific skill set in the automated testing tools you'll be using.

Manual testing is different and requires testers to think on their feet more and be more aware of what they're doing and what the app should be doing.  We generally interview by asking candidates to explain specific processes in the applications they've worked with before, and we pay attention to the details in their responses.  Good testers will include every tiny step in a process, which means they will also include this information in bug reports.  Bad testers make assumptions about the reader's knowledge and often leave out vital information.

We've also found that general technical skills are very important to testing.  Even if the job doesn't really require it, the more technical your tester is, generally the better they'll be in QA.
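To make the manual/automated distinction concrete, here is a minimal, hypothetical sketch using Python's built-in unittest framework. The add() function is my own stand-in for whatever application logic is under test; nothing here is specific to any particular tool or product from this thread.

```python
import unittest

def add(a: int, b: int) -> int:
    """Stand-in for the application logic under test."""
    return a + b

class AddTests(unittest.TestCase):
    # An automated tester encodes the same checks a manual tester
    # would perform, so they can be re-run unattended on every build.
    def test_simple_sum(self):
        self.assertEqual(add(1, 2), 3)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -2), -3)

    def test_zero(self):
        self.assertEqual(add(0, 0), 0)
```

The point is not the arithmetic but the repeatability: these checks run identically on every build, whereas a manual tester re-derives and re-executes them each time. That repeatability is what the tool-specific skill set buys you.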


maura's picture
Training Badge

Hi there!

Happy New Year! I just got back from vacation and saw that you had requested more info on my interview process.  I hope I’m not too late!
In addition to the technical questions (which someone else should do for you) and the standard behavioral interview questions from the ICT, there are two pieces that I have found really valuable in splitting the field of applicants. The first is an additional behavioral interview question, and the second is a very basic skills test.  I'll put the skills test in a second post.

The question is a bit underhanded to ask in an interview setting, and sometimes it takes the candidate aback, but the response can be VERY telling. It goes something like this (I tried to put it in a format that matches the ICT):

“Sometimes, despite our best efforts, products get released and defects are found by our customers. In our organization, even a small defect can affect hundreds of thousands of customers, or millions of dollars, in a single day. Tell me about the biggest defect that got past you on one of your projects, how the organization handled it, and what your role was over the days or weeks following its discovery.” (Note: if they are currently working for a competitor, you may want to say something about ‘without giving up any inside information’.)

What behaviors to look for: Has this person been in a role where they see the effects of their efforts, or are they blind to how well (or poorly) the project turned out? Are they involved in prod fixes? Are they involved in “lessons learned” or retrospectives, to determine how to prevent such issues going forward? Are they forthright in acknowledging weaknesses, and open to improving, or are they defensive? Does it really, REALLY bother them that they missed something?

Red flags:
- Blind to whether they missed anything, won’t admit to missing anything, or blames others for the miss. If they say they’ve never missed anything, then either their projects were tiny and simple, or they are not being forthright.
- Cannot clearly explain the miss, why it happened, and what the effect was.
- Not involved in cleaning up prod issues (maybe that’s the way the organization was structured, but their explanation might tell you what they thought of that).
- Involved in cleaning up prod issues but doesn’t use that info to recommend future process improvements.

Good signs:
- Can clearly explain the miss, why it happened, and what the effect was, even to someone unfamiliar with the product. Very important, since this is a good predictor of how their bug reports will look when they are testing your project.
- Involved in high-pressure situations such as working prod issues: can take the heat, works with a sense of urgency, etc.
- Not only supports the fix itself, but also learns from it, thinks about what could be done differently to prevent the miss, and recommends or implements process changes.
Can't wait to hear what my fellow QA managers on this site think of this...

maura's picture
Training Badge

(second post with more detail about my standard QA interview)

The skills test I use is adapted from an exercise I did during a training course by Cem Kaner way back in the 90's, so original credit should go to him for the idea.

This is super general, so that it can be used with fairly “young” testers to get an idea of their aptitude, without being specific about the nature of the app they would be testing. It is geared toward manual black box testers though. If you are hiring for an automated testing role, this exercise won’t be appropriate… but your dev person could probably come up with something similar in that case. Anyway I give them a sheet with this on it:

Candidate Name ________________________ 
Interview Date __________________
Given the following Requirements and Screen mock-up, perform a 5 minute brainstorming exercise to list which MANUAL tests you might perform to evaluate this functionality. (Note, automated tests should be considered out of scope for this exercise). After the 5 minutes is up, you will be asked to present your thoughts to the interview team.
1. System should allow the user to enter two numbers.
2. Each data entry field is 2 characters long.
3. Once both numbers are entered, the system should present an accurate sum in the third field.
(I can't get the screen mock-up to format properly here, but picture each of those short lines as a text box where it's unclear how many characters are allowed, and you get the overall idea.)

Screen Mockup:
    ____
+   ____
=   ____

So, a few points about the exercise: 
There are two basic measurements of quality that testers should consider when formulating test plans: Meets Requirements vs Fit for Use.  I intentionally wrote pretty vague, incomplete requirements which leave a lot of room for interpretation. Bonus points to the candidate who comments on the meaning of the requirements or asks clarifying questions - that’s a candidate you’ll want to snap up.
1. A bad tester would describe 3 test cases, essentially restating each requirement with the words “Test that…” in front of them. Very basic, and leaves room for lots of bugs to sneak through. This tester is only looking at “Meets Requirements”, and those requirements leave a lot to be desired. If they tested for Meets Requirements and didn’t comment on how bad the requirements are, that’s a big problem.
2. A mediocre tester would describe what data could be entered and what the expected result should be for each of those test cases. This person is still focusing primarily on “Meets Requirements”, but at least is approaching it in a detailed manner.
3. A better tester would try to break the system, think about what a user might do, etc.  They would be thinking about the unwritten “Fit for Use” requirements. This tester would be trying negative numbers, decimals, alpha or special characters, zeroes and null values, sums over two digits, etc.
So, here are some of the things you can learn about a candidate by looking at their results and having them walk through them with you. Even if the position isn’t for a black box tester, this exercise can give insight into how analytical they are.
1. Did they ask any clarifying questions about the requirements or scope before the timer started? Bonus points if yes, but most applicants don’t.
2. Did they show an orderly thought process?
3. Did they describe their test plan clearly?
4. Did they demonstrate requirements coverage?
5. Did they think beyond the written requirements?
6. Did they comment in any way about missing or vague requirements after time was up?
7. Did they consider equivalence cases or pare down their tests after brainstorming?
8. Did they attempt to prioritize or group their tests in any way?
9. Did they demonstrate an overall “test to break” mentality? Are they looking for bugs, or trying to prove that the system works? A good tester KNOWS the system is broken somewhere and challenges themselves to tear it apart and expose all weaknesses.
10. Below is a bullet list of specific test cases they may have hit on. Note: in 5 minutes they might not hit a large percentage of these, but these types of tests show that the candidate has at least basic training in black box testing, and the more cases they hit on, the better.
a. Consider 3-digit sums or 1-digit sums
b. Consider non-numeric data entry (letters, etc.)
c. Consider 1-digit data entry or the use of leading zeroes (i.e. does it let you put 1, or do you have to put 01? If 1 is allowed, is it left-justified or right-justified?)
d. Consider negative numbers or decimals
e. What happens if you edit a number? Can you reset the system in any way to start over, or do you just overwrite whatever was there previously?
f. Blank spaces or null values (does 09 + two blank spaces = 09?)
g. Special keys such as / \ ; * ( ) can crash a program… are they tried?
h. Attempt to input more than 2 characters in either input field
i. Actually make sure the sum is correct given the inputs
j. Upper and lower limits (0 and 99) of each field
k. Ask about navigation: how can the user move from field to field? Tab? Click? Enter key?
l. Are any error messages displayed if “bad” data is entered? Does the error text make sense to the user? (i.e. when it fails, does it fail gracefully?)
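For readers who prefer code, several of the checklist items (a, b, c, h, j) can be encoded as executable checks. This is a hypothetical sketch: add_fields() is my own toy model of the mocked-up two-field screen, not part of the original exercise, and a real implementation's behavior at these edges is exactly what the candidate is supposed to question.

```python
def add_fields(a: str, b: str) -> str:
    """Toy model of the mock-up: two 2-character numeric fields and a sum."""
    if len(a) > 2 or len(b) > 2:
        raise ValueError("each field holds at most 2 characters")
    if not (a.isdigit() and b.isdigit()):
        raise ValueError("numeric input required")
    return str(int(a) + int(b))

# (j) upper and lower limits of each field
assert add_fields("0", "0") == "0"
assert add_fields("99", "99") == "198"

# (a) a 3-digit sum can result from two valid 2-digit inputs
assert add_fields("60", "70") == "130"

# (c) leading zeroes and 1-digit entry
assert add_fields("01", "9") == "10"

# (b) non-numeric entry should be rejected, not crash
try:
    add_fields("ab", "01")
except ValueError:
    pass
else:
    raise AssertionError("expected rejection of non-numeric input")

# (h) more than 2 characters in a field should be rejected
try:
    add_fields("100", "01")
except ValueError:
    pass
else:
    raise AssertionError("expected rejection of oversized input")
```

A candidate who reaches for edge cases like these within five minutes of brainstorming is demonstrating exactly the "test to break" mentality described above.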

My point in all of this is that, if they have the analytical skills to think like a tester, and can rattle off a bunch of the stuff above within 5 minutes in a high-pressure situation like an interview, then the specifics of your app and your processes will likely come easily to them.  Your mileage may vary, but this exercise has been helpful in my screening process - it once helped break the tie between two seemingly good applicants who aced all other parts of the interview.

LonPM's picture
Training Badge

Hi Maura

Apologies for the late reply, and thank you so much for the informative posts; they were extremely helpful. I tried your suggestions and it looks like they helped us find our person.

I much appreciate your time and effort.


maura's picture
Training Badge

I'm glad - I hope you found a superstar!