by user0x1d on 6/28/22, 7:50 AM with 143 comments
Do you agree with those? Or is it just the case that the tests are designed to see whether the applicant is actually really smart, interviewers want to work with really smart people, and those complaining about these types of questions are simply not good enough?
Honest question, and I'm not taking a side, although I admit I tried to phrase it somewhat towards getting favourable FAANG responses.
by romanhn on 6/28/22, 9:07 AM
It took me years to internalize this and get over my aversion to studying for these interviews. At the end of the day, many reasonably competent people could do the day-to-day work just fine. Interviews, however, are their own separate thing, and without prep you're quite unlikely to pass at these tech giants (yes, there are people who have done it, but they are the minority). Might as well make peace with it. Personally, I think this is a hell of a feedback loop companies have gotten themselves into: as far as I can tell, they all struggle to hire senior talent, yet are unable to let go of these hazing processes.
I do reserve a certain amount of ire for companies/startups that copy these interview styles without having the same constraints or pipelines. Knocking out a large portion of candidates who would likely have done just fine on the job, simply because the hiring team didn't bother to build an interviewee-friendly process, boggles the mind given how desperate many of these companies are to hire.
by throwaway9191aa on 6/28/22, 12:04 PM
My AWS interviewer (I got super lucky) asked me a really basic question about linked lists, then expanded it to a doubly linked list. Then we talked about inheritance. Only then did we attempt to find the nearest parent of two nodes in a tree. It was a great example of determining knowledge, and whether that knowledge can be applied to problem solving.
When I shadow interviews I usually hear folks ask candidates something like "find repeating k-clusters in a string" and then just silently wait for the code. Or even worse, design a filesystem. Then critique the candidate for not thinking of weird symbolic link ownership edge cases. Unsurprisingly, this is just what it is like to work at AWS :(
The question needs to have multiple solutions that you can discuss with the candidate. The question needs to have some follow up like "What if you were writing this on an embedded device? How would that change your solution?". These situations DO arise every day. Here is a simple problem, with annoying institutional requirements. Can you still come up with something? Or did you just memorize the algorithm?
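The "nearest parent of two nodes" exercise mentioned above is the classic lowest-common-ancestor problem. A minimal recursive sketch, assuming a plain binary-tree `Node` class (an illustration, not the interviewer's exact setup):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def lowest_common_ancestor(root, p, q):
    """Return the deepest node whose subtree contains both p and q."""
    if root is None or root is p or root is q:
        return root
    left = lowest_common_ancestor(root.left, p, q)
    right = lowest_common_ancestor(root.right, p, q)
    if left and right:        # p and q sit in different subtrees
        return root
    return left or right      # both on one side (or not found)
```

The nice property for an interview is exactly what the comment describes: it builds directly on the linked-list and tree discussion that came before it, and the recursion admits follow-up questions (iterative version, parent pointers, n-ary trees).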
by ggambetta on 6/28/22, 11:49 AM
Disclaimer 2: I interviewed with Google twice, and I was hired twice. This might bias me.
Are we talking about interviews by FAANG, or interviews by startups that cargo-cult FAANG? In my experience as a candidate (twice) and as an interviewer (O(100) times), I never asked, nor was I ever asked, the kind of Leetcode question people love to hate.
The questions involve a whiteboard, yes; and they involve reasoning about algorithms and writing code, yes; and I don't see anything wrong with that. I see people putting down whiteboard interviews and people almost proudly saying their job consists of gluing together fragments of StackOverflow code they don't fully understand, and that just won't cut it at a FAANG, so you need to know what kind of candidate you're dealing with.
The most complex my go-to question gets is very basic recursion and very simple caching. A surprising fraction of the candidates I interview, who always have a nice-looking CV and have gone through recruiter and phone screening, can't do basic recursion and basic caching. I'm not asking a trick question, you don't need to remember an obscure factorial formula to find the optimal solution, none of that crap.
Recursion and caching. Table stakes, IMO. You can get away with not grokking recursion and caching for some kind of role where you can copy-and-paste StackOverflow answers and random tutorials, but you probably won't do well as a SWE in a FAANG where you're handed a vague feature request and you're expected to deliver a feature that will be used by a billion people by the end of the quarter.
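To calibrate what "basic recursion and basic caching" means here, a generic example at roughly that level (not the author's actual go-to question, which isn't given) is a memoized recursive function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)      # the "caching" part: memoize results
def fib(n):
    """Naive recursion is exponential; the cache makes it linear."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Being able to write something like this by hand, including the cache without the decorator, and say why the uncached version blows up, is about the level of fluency being described.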
Not saying that every feature requires recursion, caching, and complexity analysis, but these kinds of skills are a good indicator of whether you're the copy-and-paste variety or the build-something-new variety, and it's important to know which one the candidate is.
Now if you're interviewing for a startup whose product serves a small number of users and you're asked the tricky gotcha Leetcode questions for no good reason, sure, that's dumb. But don't hate the good use cases just because the cargo-culters get it wrong.
by wai1234 on 6/28/22, 10:20 AM
The notion that you figure out "if the person applying is actually really smart" is a 'just so story'.
by Youden on 6/28/22, 10:35 AM
The FAANGs give you these weird tests because they're not really checking your programming ability so much as your general problem-solving ability. Problem-solving is the biggest part of what you're being hired for; code is just how you write down your solutions.
by Grustaf on 6/28/22, 2:38 PM
As others have pointed out, the idea is to provide a context where you can judge the candidate's knowledge, intelligence, and ability to explain their thinking. Whiteboard tasks happen to be a good way to do that.
When the interviewer asked me to implement a depth first search algorithm on the board (although he didn't call it that), it most definitely wasn't because he thought that I would ever need to do anything like that. That's why they accepted me even though I didn't quite nail it. That's why I never worry about memorising algorithms, that's really not the point.
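For scale, the depth-first search in question fits in a handful of lines. A minimal iterative sketch over an adjacency-list graph (an illustration; the interview's exact formulation isn't given):

```python
def dfs(graph, start):
    """Visit every node reachable from start, depth-first."""
    visited, stack, seen = [], [start], {start}
    while stack:
        node = stack.pop()
        visited.append(node)
        # Push neighbors in reverse so they pop in listed order
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return visited
```

Which is partly the point: the value of the exercise isn't reciting these lines from memory, it's whether you can reason your way to something like them on the board.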
by shaftway on 6/28/22, 5:21 PM
None of this tests real-world skills. You never have to weave as tightly as the test requires, going that slowly makes it painfully difficult, and if you have any balance issues at that speed you should put your foot down, even if it's just for a quick touch. But they keep the test as-is. The thinking is that if you're capable of doing the hard stuff in the test, it demonstrates that you've had enough time on the bike to figure out the rest.
The leet-code interview questions are the same. I don't expect you to have read the problem before, or to give me a perfect answer, because the interview isn't the job. But how you handle a question like that gives me insights into roughly how much experience you have and how comfortable you are with code. If you can handle that question, even if you wobble a bit, you'll probably be fine on the job.
by elijaht on 6/28/22, 11:38 AM
I would agree with the comments here saying leetcode isn't appropriate for super-senior roles, though I'm not privy to what hiring at those levels looks like. That being said, I think most people overestimate how senior or deserving they truly are; even for staff engineering positions there are enough reasonable candidates that a more standard interview works.
I will also note that for all but the lowest level there is some component of system design in an interview loop. This I think is a better test for most roles
I don’t think leetcode is perfect and at smaller companies doing it instead of a more bespoke interview is lazy and suboptimal
by dave_sullivan on 6/28/22, 10:08 AM
But leet code questions are not that. And I don't appreciate the interviewer acting like I'm an idiot or lying because I don't do well at them. Just tell me about the job and let's talk about that. It's really not that complicated.
by conjecTech on 6/28/22, 5:30 PM
For context: I've done 3-4 full job searches ranging from new grad to Staff+ that included FAANG companies, and received offers each time from some but not all of them. I've been fortunate to have other offers I preferred each time, though until recently this has meant accepting compensation below FAANG-levels.
One of the primary reasons I haven't taken a FAANG offer is because of the over-scoped nature of most of the work they offer. The positions at these companies often involve squeezing more profit out of existing successful business surfaces by fiddling with the knobs. If you're working on a surface that produces $10M in revenue each year, and you can improve it by 10% each year, you can justify a good wage. That kind of straightforward investment is exactly what middle managers like.
However, jobs like that will seldom see you architecting or changing systems at a sufficient scale where these skills become relevant. There are undoubtedly exceptions at these companies. I'm not making a universal statement. But having watched the careers of people smarter than myself both inside and outside of FAANG, I've seen a considerable gap emerge in the technical abilities and accomplishments in favor of those at leaner companies.
I think this explains the experience of most engineers going through this. To be Jeff Dean, you need all of these skills. Because of this, OG engineers like him made it part of the recruiting rubric. But you are unlikely to become Jeff Dean by joining Google now. If that is your goal, my advice is to seek out companies with the highest ratio of users to engineers. A value of 1e6/1 is a good target. These places probably look like dumpster fires because of their scaling problems. But they need these skills and have no alternative than to let you work on the problems that require them. Make sure there are a couple of people there who have done it before that you can learn from and hold on for as long as you can.
by notacoward on 6/28/22, 11:53 AM
The interviewers did a pretty decent job within the constraints they were given, but it really was an almost complete waste of time for all of the reasons others mention. Even for a junior developer - who the process is designed for - it barely seemed to touch on the skills they'd actually need to succeed there or anywhere. Maybe it also weeds out impostors among those with no searchable record, but if that's the purpose then such an interview should only be done in those cases and skipped for those who have plenty of evidence that they can do the work.
The design interviews were more useful IMO, and less skippable, but those don't seem to be the topic here.
by marssaxman on 6/28/22, 12:37 PM
I've never really understood the hate. Either I've consistently had a different experience with interviews, or the people complaining have a really different idea about what their job is supposed to be.
by nunez on 6/28/22, 2:47 PM
The problem with whiteboarding (for me) is that it optimizes for people who are good at studying for and passing tests instead of people who are actually good for the job, which, ultimately, makes it a suboptimal determinant of how well the candidate will perform once they are hired.
I think that whiteboarding + delving into past experience is a great combination for screening. You can't bullshit experience.
by tyleo on 6/28/22, 11:16 AM
Edit: I’ve both worked at and interviewed others for Microsoft. On my team you could basically ask what you want. My sense was that bad questions were more a product of laziness on the interviewers part than corporate policy.
by ekleraki on 6/28/22, 7:27 PM
One of the interviews today focused on the task I will be working on: we started from a simple instance of the problem and saw how we could improve upon it. The task revolved around querying and identifying entities. We worked through multiple instances to see how we could improve on the initial approach, or how to handle odd cases.
After that some leetcode was included.
The other two interviews were more on general technical and ml stuff.
I found this approach to be better than other kinds of interviews I have gone through.
by lynndotpy on 6/28/22, 9:00 PM
The process involved a few automated algorithms questions, and then later, a whiteboard interview.
At the time, I was actively studying exams involving DS&A questions, so "find an algorithm that does X in O(Y) time" was a big chunk of what I was doing as a person.
If I wanted to pass now, I feel like I'd have to waste some time, say, remembering exactly how to implement a priority queue. That time could probably be better spent on learning other skills.
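For what it's worth, "exactly how to implement a priority queue" usually means a binary heap. A from-scratch sketch of the kind of detail in question (illustrative only, assuming a min-heap over comparable items):

```python
class MinHeap:
    """Array-backed binary min-heap: the sift-up/sift-down details
    that whiteboard interviews expect you to reproduce from memory."""
    def __init__(self):
        self.data = []

    def push(self, item):
        self.data.append(item)
        i = len(self.data) - 1
        while i > 0:                              # sift up
            parent = (i - 1) // 2
            if self.data[parent] <= self.data[i]:
                break
            self.data[parent], self.data[i] = self.data[i], self.data[parent]
            i = parent

    def pop(self):
        top = self.data[0]
        last = self.data.pop()
        if self.data:
            self.data[0] = last
            i, n = 0, len(self.data)
            while True:                           # sift down
                left, right, smallest = 2 * i + 1, 2 * i + 2, i
                if left < n and self.data[left] < self.data[smallest]:
                    smallest = left
                if right < n and self.data[right] < self.data[smallest]:
                    smallest = right
                if smallest == i:
                    break
                self.data[i], self.data[smallest] = self.data[smallest], self.data[i]
                i = smallest
        return top
```

In practice you'd just reach for the stdlib `heapq` module, which is rather the commenter's point about the study time being better spent elsewhere.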
by blisterpeanuts on 6/28/22, 11:48 AM
Meta/Facebook has a hiring freeze right now, as do some units of Netflix and Amazon. Maybe it’s not a great time to be seeking a position in a FAANG anyway, except for a few elite positions. Feels like that ship has sailed.
by erulabs on 6/28/22, 6:17 PM
The interviewer loudly scoffed and pulled out his phone before I was finished writing. I was told later that he "would have preferred a systems language". I suppose I could have been told that; I would have happily swapped to C! The only job I've ever been turned down for in my life!
by ctvo on 6/28/22, 1:19 PM
Take home assignments, deep dive chats, pair programming, temporary contracts all have their own drawbacks. I've tried everything but temporary contracts.
We're measuring along a few dimensions:
- Can someone fake this expertise?
- What's it like working with this person?
- What's their general intelligence?
- What's the depth of their prior knowledge in computer science?
- What's the scope of the problems they've solved previously?
For both the candidate and the company:
- How long does it take to conduct the interview?
- How much money does it cost?
- How many legal resources does it require?
You can assign points to each interview type, and at scale, I'd take whiteboarding over the alternatives.
by aristofun on 6/29/22, 12:30 PM
Now all you have to do to work there is win the FAANG interview lottery :)
by Balgair on 6/29/22, 1:13 AM
I wonder how FAANG execs feel about the answers here. Same for start-up execs.
by hnfong on 6/29/22, 5:18 AM
Whiteboarding is a good proxy for general "intelligence" (similar to "IQ"). It doesn't test real world scenarios, but if you're looking for a smart person who knows how to implement code without having to rely on IDEs and autocompletion, this isn't a bad test.
That said, whether "smart" and "can code" translates to job performance really depends on what the job requires. For the FAANGs that do centralized hiring, this "job requirement" is basically the best they can do for software engineers, since more specific requirements run contrary to the centralized hiring process.
For the non-FAANG companies that cargo-cult FAANG practices, it should be pointed out that doing so really runs contrary to their interests: they're competing directly for the same pool of people with fewer resources, even though they may not actually "need" the same people and are in a position to hire more specifically for the positions they require. If I were hiring, I'd actively try to structure the interview process so that it selects for quality candidates who have a higher probability of being passed over by FAANG processes (or, in short, don't copy FAANG interview questions!).
I suspect there's also an element where salty interviewees overestimate the difficulty of the whiteboard questions they get asked. I'm not saying we should disbelieve them, but I've personally seen cases where the interviewee misinterprets or miscategorizes the question asked and then complains about it. Also, the Dunning–Kruger effect and all that.
That said, criticism of "FAANG-style" interviews is in general probably well grounded. But honestly, for the FAANGs that rely on these styles of interviews (as pointed out in some other comments, not all of them interview the same way), the inertia is so high that it's unlikely anything could change their practices, especially since they are in fact at a local optimum (there's nothing "better" if you do centralized hiring at scale). It's crucial that smaller companies don't cargo-cult these practices, though.
by lesuorac on 6/28/22, 2:08 PM
I would argue it's mostly about filtering out somebody bad, not testing whether somebody is actually really smart. This might sound the same, but it's not. It's a Type I vs. Type II error [2] situation, and for the most part FAANG is OK with a process that won't hire every qualified candidate as long as it doesn't hire unqualified candidates. But that's the ideal situation. A lot of people within FAANG do not like doing interviews but are pretty much forced to (or they get an arbitrarily lowered performance eval), so there are definitely going to be a lot of interviews conducted by a really disinterested interviewer, and that's not going to be a good experience.
I do base my interview questions on real problems, and I've had a few interviewees ask me "if they'd ever use this at work". I just ask them how they think X feature works, and I've never gotten a response back from them after that ...
W.r.t. how long the FAANG interview process takes (i.e. a month or more): this is outrageous, but I don't have much visibility into where the problem is. It's not with the interviewers, though (average feedback is reported in under 2 days for an entire slate).
Depending on when the blog was written they may also be very correct. Google was known for using rather un-job related questions [3].
[1]: https://www.semanticscholar.org/paper/The-Structured-Employm... [2]: https://en.wikipedia.org/wiki/Type_I_and_type_II_errors [3]: https://www.wired.com/2014/08/how-to-solve-crazy-open-ended-...