I Gave Myself 5 Pieces of Feedback on My Amazon Data Science Interview
I had a phone screen with Amazon Web Services a couple of days ago for a Data Scientist role. Two days later, I was rejected with no feedback. I realized that most data science interviews don't give us feedback, which leaves us no way to grow. So I'd like to give myself some quick feedback on this interview; I hope it helps you prepare for and perform better in your AWS data science interview.
The interview consisted of four parts:
- 0 - 10 min: self-introduction + why this role + any questions
- 10 - 30 min: 2 SQL questions + 1 Python question, all data manipulation
- 30 - 55 min: machine learning
- 55 - 60 min: any questions
Here is the feedback list I made for myself:
- Spread your 'any questions' questions across several aspects to show your preparation and interest: in this interview, I only asked what the team is doing and what use cases it is facing. I could have asked one question each about different aspects: the team's phase, use cases, career growth and mentorship, company culture, the interviewer's own working experience, etc.
- The coding questions themselves are easy money, but expect to explain the logic and the result correctly in only one minute (30 s to think + 30 s to explain) and to write the correct answer FAST: during the interview, I felt the interviewer wanted to test whether I'm proficient at writing code in both SQL and Python. The goal is neither challenge nor perfection; it is about moving fast and taking corrective action based on the interviewer's feedback on your answers.
- Be more concise than you think is concise enough, and save the rest of what you know for follow-up questions: during the machine learning portion, I thought I had explained the terms and logic well enough, but then the interviewer kindly reminded me that we only had 20 minutes left and there was still a lot to cover.
- Make sure everything you say is on point: I had a hard time figuring out what the interviewer expected me to say when asked how I would assign weights to each model in the final ensemble. I gave two approaches, but the interviewer just said 'ok…', which meant I was only circling around the answer.
- Be a good listener, and only give an answer once you fully understand the reason behind what the interviewer asked: when the interviewer hinted that I should use another model performance metric, she asked, 'Is it possible to have high precision/recall but a very low true negative rate?' I dug into the question itself, which distracted me from the point the interviewer actually wanted me to consider: class imbalance. Even though I did propose using ROC-AUC for class imbalance, I should have rephrased the hint as 'How do you handle class imbalance?'
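On the ensemble-weighting question, one common approach (my guess at what the interviewer may have had in mind, not a confirmed answer) is to weight each model by its normalized validation score and take a weighted average of the predicted probabilities. A minimal sketch in Python, with made-up model names and numbers:

```python
# Sketch: weight each model by its share of total validation score,
# then blend their predicted probabilities for one sample.
# All scores and probabilities below are invented for illustration.

val_scores = {"logreg": 0.82, "rf": 0.88, "xgb": 0.90}  # hypothetical validation accuracies

total = sum(val_scores.values())
weights = {name: score / total for name, score in val_scores.items()}  # sums to 1

# Hypothetical predicted probabilities for a single sample
probs = {"logreg": 0.60, "rf": 0.75, "xgb": 0.80}

ensemble_prob = sum(weights[name] * probs[name] for name in probs)
print(round(ensemble_prob, 4))  # 0.72
```

Weighting by validation performance is only one of several options; stacking (training a meta-model on the base models' outputs) is the usual next step when simple weights aren't enough.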
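The interviewer's hint about precision/recall vs. the true negative rate can be checked with a toy confusion matrix: when negatives are rare, a model that predicts almost everything positive still scores high on precision and recall while its true negative rate collapses. A small sketch with invented numbers:

```python
# Toy imbalanced dataset: 95 actual positives, 5 actual negatives.
# A degenerate model that labels every sample positive looks great
# on precision/recall but has a true negative rate of zero.

tp, fn = 95, 0   # every actual positive is predicted positive
fp, tn = 5, 0    # every actual negative is also predicted positive

precision = tp / (tp + fp)   # 95 / 100
recall = tp / (tp + fn)      # 95 / 95
tnr = tn / (tn + fp)         # 0 / 5 (true negative rate, a.k.a. specificity)

print(precision, recall, tnr)  # 0.95 1.0 0.0
```

This is exactly the class-imbalance trap the question was pointing at, and why a metric such as ROC-AUC, which accounts for both classes, is a better fit here.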
I hope everybody passes all their interviews, and I hope this post helps make that come true:
P(B pass|A fail) > P(B pass)