I'm glad to see this getting roasted in the comments, as it's a really good example of how companies put out self-serving pseudo-statistical nonsense in an effort to promote themselves.
There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of the interviewer. It makes no effort to show where these engineers are going, what the interview process they're completing looks like, what impact demographics had on this, etc.
I find this stuff repugnant. It perpetuates the myth that there's something really special about Silicon Valley engineers, while making only lazy and perfunctory efforts to examine any alternative explanations than "this is where the rockstar ninja coders work." Shameful.
"There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of the interviewer."
And yet, these scores are measuring something, and averaged across tens of thousands of technical interviews, you have enough statistical power to average out the particularities of each interviewer.
I'm sorry you find the results repugnant, but the results are what they are. And the article did have a large section on limitations of their analysis.
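As a toy illustration of the statistical-power claim (the numbers here are invented, not from the article): if each interviewer contributes an idiosyncratic bias plus per-interview noise, any single interview can be badly off, but the average over many interviewers converges on the underlying signal, provided the biases aren't all pointing the same way.

```python
import random
random.seed(0)

# Toy model: each candidate has a true skill; each interviewer adds
# their own idiosyncratic bias plus per-interview noise.
def observed_score(true_skill, interviewer_bias, noise_sd=1.0):
    return true_skill + interviewer_bias + random.gauss(0, noise_sd)

true_skill = 3.0
biases = [random.gauss(0, 0.5) for _ in range(10_000)]  # 10k interviewers

# A single interview can land far from the true skill...
one = observed_score(true_skill, biases[0])

# ...but the mean over many interviews converges toward it, because
# non-systematic interviewer biases cancel out in aggregate.
many = sum(observed_score(true_skill, b) for b in biases) / len(biases)
print(abs(one - true_skill), abs(many - true_skill))
```

The caveat in the last comment is exactly the point of contention in this thread: averaging only removes noise that isn't systematic. A bias shared by most interviewers survives any amount of aggregation.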
It's not the results I find repugnant, it's the assumption that the results have any real world validity. The _something_ they're measuring is as likely to be demographic biases as it is technical or communication skills.
> it's the assumption that the results have any real world validity
Of course it has real world validity. It has real world validity regarding how well people are likely to do in tech interviews.
You can feel free to say that the way the tech industry does interviews is bad, or biased, or has all sorts of problems. But vague measurements like "technical" or "communication" skills are a pretty accurate reflection of how tech interviews are actually done, in the real world.
All the moral outrage everyone is having about this seems to have nothing to do with the accuracy of the report, which is about measuring tech interview performance. Instead, it seems to be about the tech interview process in the industry in general.
Over the last 22 years I've interviewed hundreds of candidates, and hired and directly managed probably close to 100 individuals, in widely varying environments ranging from a 25k-employee state university, to startups of various shapes and sizes, to the hottest SV IPO of 2020.
Interview practices are the way they are in order to raise the floor at FAANG, which have high internal complexity and high salaries, leading to a very fat hiring funnel with a high risk of hires turning into dead weight in "rest and vest" mode once they get inside.
However, humans are smart and interviewers are lazy, so inevitably the people who optimize for this process start beating out more able engineers who don't have the time or inclination to jump through these hoops. In my experience, the proportion of really good software engineers is roughly equivalent across all companies with baseline competent technical leadership. FAANG does have a lot of the outliers on the high end, but they also have a lot of folks who can't tie their own shoes without the world-class tooling, infra support, and technical design guidance that those companies surround them with.
All I can say is that in my experience the average programmer at a company with a highly selective FAANG-style interview process is far sharper than the average programmer in the industry as a whole. Additionally, managers at more selective companies tend to be less parochial and less micro-managing.
The process isn't perfect, and it has some type I errors and a lot of type II errors, but it's a lot better than just throwing darts at a stack of resumes.
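To see why even an error-prone filter beats darts, here's a back-of-the-envelope sketch. The rates below are made-up assumptions purely for illustration, not measurements of any real process:

```python
# Hypothetical numbers, just to illustrate the "better than darts" point:
base_rate   = 0.20  # fraction of the applicant pool who'd actually do well
sensitivity = 0.60  # strong candidates who pass (1 - false-negative rate)
specificity = 0.90  # weak candidates who are rejected (1 - false-positive rate)

passed_good = base_rate * sensitivity
passed_bad  = (1 - base_rate) * (1 - specificity)
precision   = passed_good / (passed_good + passed_bad)

print(f"random darts: {base_rate:.0%} of hires work out")
print(f"noisy filter: {precision:.0%} of hires work out")  # 60%, 3x the base rate
```

Under these assumed rates the filter triples the fraction of hires who work out, even while rejecting 40% of the strong candidates, which is roughly the trade-off described above: lots of type II errors, but still far better than chance.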
We're not in disagreement, note I explicitly did not say "the industry as a whole", and I added the qualifier "competent technical leadership". There's a lot packed into those three words, and without it you'll steadily bleed your best talent.
Micro-managing, non-technical leadership is the failure mode you're pointing out, and it's definitely the worst of all worlds, far worse than any failure mode at a FAANG. But on the other hand there is also parochial leadership who knows what they don't know and how to trust talent. Those environments can actually be fine for technical people. Granted, they won't necessarily get exposed to the exchange of ideas and mentorship from FAANG, but that's not a deal breaker in the modern internet age, and autodidactism has its place in furthering the state of the art by side-stepping social convergence to "best practices".
And on the flip side, I agree FAANG people are "sharper than average", but there are also headwinds to retaining the best talent. One is that you have to have a tolerance for moving slow, jumping through hoops, and generally dealing with a whole class of friction which many high performing engineers consider bullshit. Some will suck it up and deal with it to get the fat comp packages, but there is now an entire generation of <35 engineers who have had expectations set on comp levels based on a decade+ bull run of tech stocks which I suspect is unlikely to repeat over the next decade. There's also the appeal of working on classes of large problems that is only available at the biggest tech companies, but the actual interesting work is much fewer than the number of engineers. The majority are just dealing with incidental complexity and requirements of scale itself which can definitely occupy the mind, but may lead to an itch for more tangible impact.
Finally I will say there's a middle-ground between FAANG-style interviews and "throwing darts at a stack of resumes". If you are a small to mid-size company without the brand appeal and top-of-the-funnel recruiting volume of a FAANG, then you are absolutely shooting yourself in the foot by cargo-culting the FAANG approach. You know what the alternative is? Have qualified people do traditional interviews, going deep enough to get a gut feeling on their technical competence. Of course you'll get some Type I errors here, so then you have to actually pay attention to what they're doing once they start working. If they are not able to ramp and be productive in a reasonable amount of time, then you have to let them go (or at least pivot them into a position where they don't do damage). Big companies can't do that because there's enough chaos, lazy managers, and HR legal fears that Type II errors are a material risk. In summary, FAANG approach is solving for specific circumstances that most companies don't have, and it leaves a lot of talent on the table which is an arbitrage opportunity for companies willing to do the hard work to think about their recruiting strategy from first principles.
It doesn't necessarily say anything other than "where you work now has some predictive value for how well you'll be perceived by interviewers," in which case these aren't "top performers", they're "people with jobs", and it's just regurgitating the truism that it's easier to get a job if you have a job.
I'm not buying this as a moral outrage question, I'm wondering if this is adding anything meaningful for us to look at or if it's just a surface-level puff piece masquerading as an analysis.
Except that interviewing.io interviews are specifically designed to be anonymous. This is even stated on their website. The interviewers do not see a resume or job history. As a candidate who's done a couple of interviews through them, the interviewers never asked anything about my background either. I don't recall even uploading a resume.
Yeah, but there's in-group jargon and technical approaches that FAANG and FAANG-adjacent engineers will pick up on. Just because you're anonymous doesn't mean you aren't unconsciously signaling your background to your interviewer.
Do you honestly believe this jargon is so unique to FAANG and "FAANG-adjacent" (that's a new one) SDEs that it can be used to pick them out, yet so secret it's not yet become industry-wide jargon?
Are you claiming they have no validity at all? Like if you built 2 teams: one team with candidates that all got 0% on the test and another team with candidates that all got 100%, you'd expect no difference at all in their real-world performance on a difficult problem?
If you're claiming something weaker than that, can you state it more precisely?
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Does this myth still have much traction? If anything, my general regard for engineers in the bay area has steadily declined in the last few years. There are so many really worthless folks who have only figured out how to look like they have a clue, but go any deeper and they flail. I know I'm painting with a broad brush, and that's not fair, but most of the great engineers I work with are at various other places around the country, not California.
Personally, as a (now relocated) Bay Area native, I agree. Overall, I think there's still a lot of prestige for large Silicon Valley firms, even if some of the gloss has justifiably started to fade.
This article certainly assumes that myth is still in place.
Hey, it's not our fault! Where else are posers gonna flock to pose? I swear, even with all the remote work, more than 95%* of the people that worked here 5 years ago still do. We live here.
I realize I wasn't being very nice, and I apologize for that. I'm quite sure there are plenty of very smart people who live in the bay area and do great work. With such a high concentration of engineers it makes sense plenty will suck, too.
My experience is biased, of course. My company has offices all over the country and a couple years ago opened a new office in SF, and I'm 93.4% sure we don't exactly pay FAANG-competitive salaries there, which affects the quality of who we can recruit there. It's kinda like how we hire engineers in Hyderabad for 1/5 the US rate and then wonder why we more often than not get substandard performance.
I know a ton of engineers. Of them all, those working at FAANG are profoundly less skilled than the others. It's impossible to miss; it's so obvious. Maybe my social network is an outlier, but I really, really doubt it.
Well, I'm not a FAANG-er and I didn't downvote, but it seems weird that the ones at FAANG would be so obviously less skilled than the others. It would be believable that they were the same skill level, but otherwise it seems weird.
Unless of course the weighting of his groups is off somehow, which should have been noted.
Actually it doesn't seem like the statement was very informative.
I mean. It was a statement of fact, you can believe it or not. The engineers I know who went into FAANG (half dozen different people) are literally the worst engineers I worked with.
Author here. Yes, the skills are left to the interpretation of the interviewer, but most of our interviewers are senior engineers at FAANG. We've done quite a bit of work internally to make sure our interviewers are well calibrated, and we have a living calibration score for each one (calibration is based on how the interviewees they interview end up doing in real interviews).
The interviews in question are a mix of algorithmic interviews and systems design interviews.
Also, if I ever use "rockstar" or "ninja" in my posts, I hope someone finds me on the street and punches me in the face. I'd deserve it.
FAANG interviewer here. I've conducted many hundreds of interviews for multiple FAANG companies. The totality of my training in how to interview is about 4 hours (when I combine the training of each company). I have zero confidence in the usefulness of calibrating my answers and expect that neither I nor anyone else who does this would reliably score the same person with the same score most of the time, outside of a small percentage of outlier candidates.
Just to argue against myself, I do think that if enough interviewers interview a candidate, you will get an aggregate score that will give you a pretty good sense of how well that candidate is likely to do on other interviews, and I have found that candidates that do well on these interviews tend to make good employees, although that's based only on the fact that everybody I work with has passed one of these interviews and they've mostly been pretty good employees. I suspect a lot of people who fail these interviews would also make pretty good employees, but I'll never know.
I've been in interview rounds where if ONE interviewer doesn't like how you did on the stupid coding test, you're out. So what you are saying is pretty much BS.
I don't think I follow you. Your company has a hiring process where it rejects all candidates who fail any interviews. How does the existence of such a system invalidate anything I said?
>I suspect a lot of people who fail these interviews would also make pretty good employees, but I'll never know.
Why don't you take a leap of faith once in a while on someone who hasn't done well? Especially if you ever interview interns and have a bunch on that team, that's a near zero-cost gamble for a large corp.
Well, the answer to "why" is "they don't put me in charge of the hiring process."
Related, interns are a fantastic way to hire because you get WAY better information about them. Instead of an hour or two of riddles, you've got a three month work history which a couple of trusted employees have witnessed. That's WAY better signal than any whiteboard interview problem could possibly get you.
Also, what you do with interns could be done with any developer you hire. You just hire them for a few months and pay them. If you like their work, keep them; otherwise let them go. No need for coding interviews.
Is there any statistical reason I should assume an interviewer at FAANG has some kind of special insight into what good technical and communication skills are? Were any attempts made to adjust for demographic biases? Is it possible, for instance, that Dropbox engineers are disproportionately taller white dudes with nice hair? I know that sounds a bit unserious, but all of those factors would increase the likelihood of a higher score.
I appreciate that you never used "rockstar" or "ninja". That dig was a bit unfair of me.
no, we didn't adjust for demographic biases because, look, we're an anonymous platform. we periodically survey our users to get demographic data, but it's not something we ask in the app because we've never been able to resolve "tell me your race & gender" with "hey we're anonymous, and you'll be judged on your performance and nothing else".
last thing i want is to perpetuate stereotype threat inadvertently. it's possible to do this right, i think, but we haven't gotten there yet.
The point about stereotype threat is valid, although it also means that we're unable to get any insight into a major axis of interview performance.
I think what we have here is an attempt to imply, consciously or not, a causal link between interview success and previous/current employment. But without drilling into the other factors underlying their success, we get a lot of noise and not enough signal. Couple that with the continued mythologizing of FAANG greatness, and you get an article that perpetuates two of the more toxic notions in tech: FAANG is the top of a pyramid and talent is concentrated in a handful of companies. Neither are true, and neither are probably your intent, but that's how this reads.
> "but most of our interviewers are senior engineers at FAANG"
Wait, so the result is that "interviewers from FAANG companies rate highly interviewees from FAANG (and FAANG-adjacent) companies"?
Or maybe more causally: "those who have already passed FAANG-style interviews are more likely to pass interviews conducted by FAANG people"?
I appreciate the mission here - but if the idea here is to give people a fair shot even if they don't have the pedigree of FAANG, building FAANG interview styles into the system seems counter to your stated goals. If anything these results are concerning - you can interpret the findings in (at least) two ways:
- these companies hire or produce superior engineers, the results you got are indicative of a broader higher caliber of engineer in those companies.
- the interviewing exercise is optimizing for "people who can pass/have already passed FAANG-style interviews", which rolls in all of the myriad biases of FAANG hiring and perpetuates them.
> "It's a well known fact that SV titles hold next to no weight."
Except at FAANG; the rewards are so great that the competition is fierce. Whether they gained their levels through engineering ability or savvy politicking, you can be sure they are adept in at least one of the two.
While there may not be any reliable measure of communication skills across the industry, the fact that the data is based on scores given by a large number of people means, by definition, that it's accurate.
Think about it carefully - if people rate people as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and its normalization, but it's basically a tautology.
"Think about it carefully - if people rate people as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and its normalization, but it's basically a tautology."
So if a lot of people rate you as a great leader, you are a great leader? Even if you lie to their face and deliver terrible results? Objective reality doesn't matter?
So the greatest leader in the world is in North Korea?
Communication is a clearly measurable skill. Just because a lot of people have been sold a lie, that doesn't make it true
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Why are you sure it's a myth? My prior would be to believe that engineers at the most exclusive companies with the highest hiring bars that pay 3-5 times more than average would be better programmers. The article is just one data point confirming what intuitively should be true.
If Silicon Valley engineers are no better than anywhere else then someone should notify the execs at FAANG, I'm sure they'd be interested to know they are dramatically overpaying for talent.
I don't understand what is so controversial about this. SV companies recruit the best talent from around the world, and it's where the best talent wants to work. Similarly, the best financial talent is in NYC and London, the best actors are in Hollywood, etc.
There are capable people who don't want to live where 1000sqft home costs $1M. Never mind all of the other problems with the region. Selection bias doesn't prove anything about the people you aren't selecting for.
> It perpetuates the myth that there's something really special about Silicon Valley engineers
In my [disillusioned] experience, this holds true: Silicon Valley engineers are very good for building throwaway MVPs that they won't have to maintain more than 3 years.
I've been very disillusioned by the quality of the software written by Silicon Valley companies, but in hindsight it makes sense: "move fast and break things" development culture resonates with the "raise VC money every 18 months" business culture, and then look for an exit in 5 years tops. There is no incentive, in dev or business, to really develop good software.
While I agree that this particular exercise is riddled with problems, I simply cannot imagine Hacker News rolling over and accepting evidence-based answers to questions of this nature, regardless of where the data came from or what the methodology was.
> There’s also the issue of selection bias. Maybe we’re just getting people who really feel like they need practice and aren’t an indicative slice of engineers at that company.
Or that your interview preparation platform prepares candidates better for Dropbox's interview process than it does for Microsoft's. Or that the people who were confident in their interview skills for Facebook decided not to use your platform. Or that these companies have different interview processes and selection criteria (they obviously do) so ranking "best" based on performance on different tests doesn't tell you that much.
There's hundreds of different ways to slice this data to come up with different hypotheses about what's actually occurring.
Author here. The data is mostly drawn from how people who work at these companies do in mock interviews rather than how our users do in real interviews with these companies.
You should probably change the title of the blog as most gainfully employed people interpret 'best performers' as people that are very good at performing their job and/or trained for a specific Circus Act.
Something like "We analyzed 100K technical interviews to see which companies employ the people that we feel best performed in our mock interviews" would be more authentic
There's still the selection bias of who volunteers to do these mock interviews. Probably it's the people who want practice interviewing, and at Dropbox those are the top performers who want to "move up" to a Google, while at Google it's the people who aren't cutting it and know they're going to have to find another job soon.
The data and charts in the article look pretty nice!
One of the things I learned from my years in research/academia is that Design Of Experiments in itself is a pretty complicated task. Most experiments/studies are invalidated due to a huge amount of confounding factors and correlations that are not factored in for the experiments.
A cursory visit to r/science comments would show a lot of people who do science for a living providing valid criticism to published peer reviewed scientific studies due to wrong Design of Experiments procedures.
Having lived all this first hand makes me EXTREMELY resistant to take seriously the data, analysis and conclusions of the linked article.
Other than that, the effort is appreciated and I like the ideas behind interviewing.io.
Dropbox has been known as a hard place to interview since 2014. The tech companies with the hardest technical interviews were Quora, Palantir, and Dropbox (honorable mention: Fog Creek); this was from a few years back, so things may have changed. Just because a company makes it extremely difficult to get in does not mean the company is generally all-around awesome, or pays great, or employs the greatest engineers. It optimizes for people who come straight out of an elite CS program with those concepts fresh in their mind, and for people who grind out on Leetcode or who are great at interviewing. Of the three companies I mentioned above, I would not work for any of them now.
Are you suggesting that these jobs don't mainly consist of quickly banging out dynamic programming and DFS/BFS traversal algorithms as quickly as possible while a stranger stares at you? From interview experience, I assume this is what engineers mostly do at these companies all day.
My brother is an MD and I showed him some Leetcode prep material since I am studying for interviews. Specifically, a problem from Grokking the Coding Interview.
His response - “It would be helpful if they then showed how that is used in real world code“. I had to tell him I don’t think these kinds of scenarios come up often in real-world use cases, haha. They are essentially coding puzzles to filter for people who are good at solving coding puzzles, which may or may not directly translate to being good at writing application code.
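For readers who haven't done this kind of prep: here's a representative puzzle of the genre (a generic sliding-window problem, not the specific one from the book), which shows why these feel so detached from day-to-day application code.

```python
# Classic interview puzzle: "Given an array of numbers, find the maximum
# sum of any contiguous subarray of size k."
def max_subarray_sum(nums, k):
    window = sum(nums[:k])  # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add new element, drop oldest
        best = max(best, window)
    return best

print(max_subarray_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (the subarray [5, 1, 3])
```

The trick (reusing the previous window's sum instead of re-summing) is elegant, but recognizing it quickly under observation is mostly a function of having drilled the pattern beforehand, which is the commenter's point.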
Yeah "we are more selective and exclusive than Google and Facebook" was Palantir's whole schtick when they originally took off. And a lot of very smart engineers bought it.
Turns out it takes a lot more than a high leetcode bar for your interviews to run a successful company.
There's a local agency here in the city I currently reside in that touts that their hiring percentage is lower than Harvard's acceptance rate. They're telling you up front it's really hard to get a job there and it's a very "bougie" place to have on your resume, as if they're on the same level as the FAANG companies.
The interview process is fairly long, accompanied by an office tour which touts the numerous "freebies" they offer their employees. Then you find out, after the entire process, that you'll be getting paid 40-50K LESS THAN the industry average. At the time I interviewed with them, I had five years of experience in UI/UX and mobile development. What they offered me was essentially what I was making as an entry-level dev.
It was easy to turn them down. No amount of free Red Bull is going to pay my mortgage.
Since a good bit before 2014, even. They used to recruit very heavily from MIT (while Palantir was overrepresented at Stanford and Quora at Harvard--all of these reflecting the almae matres of the founders). Note that this was before the popularity of Leetcode and the whole cottage industry around trying to game algorithmic-type interviews. I'm not sure if similar companies founded today would push these algorithm-heavy interviews as hard, since they've probably lost some signal now & prevailing attitudes have changed a bit.
At any rate, it doesn't surprise me at all that Dropbox engineers do better than FANG engineers on these technical metrics. The average Dropbox engineer is almost certainly a bit smarter and a bit better at algorithms than the average FANG engineer. Of course those attributes don't automatically translate into being a better engineer, though, nor do they automatically translate into company success or anything.
Kudos to interviewing.io for sharing this analysis. I agree with the many issues in methodology and analysis that others have raised here, and I agree there's a risk that a face-value reading of the blog post is highly misleading. But this is true for all data, and pooh-poohing the analysis without crediting the sharing just leads to less sharing. To be clear, I'm supportive of the criticism, but let's also give credit where it's due.
Technical interview performance is a high-stakes field for which almost all data is cloistered in individual hiring companies or secretive gatekeepers. In my mind, all efforts, even imperfect ones, to share data are a great step here. We should encourage them to continue to share, including pursuing options to anonymize the raw data for open analysis. The field deserves more transparency and open discussion to push us all to be better.
This is super interesting! Worth noting that another possible title is "... where the best performers were trying to leave during the study timeframe".
Really interesting to see Dropbox so high - I would be curious to see some other data to corroborate that they (at least used to) employ the best engineers.
From my time interviewing, I've seen clusters of very good candidates often be more reflective of which top companies were having a hard time, internally or externally. There was a while where my company hired a lot of people from Uber; right now we're getting Amazon and Facebook/Meta.
"best performers work" - with a title like that you really just cannot take this study seriously, lol. Not to say it's not interesting, but that is one crazy claim; at best the title should be "most effective interviewees." Also, I work at a FANG but signed up for this website and can't even participate, so how you chose all these candidates is also questionable.
The author confuses successful products with high performing tech and high performing engineers. I’ve met many brilliant indie engineers that build highly performant code that almost no one knows about yet it provides these people with a steady stream of income and they would never work at faang or unicorns.
I don't see this claim being made anywhere. The article claims the opposite: the best performers on their technical interview happen to be from Dropbox/Google etc.
I was thinking the best performers may very well be at one of those companies, but there's a pretty good chance that they never did any kind of coding interview. Maybe they were hired through a merger or acquisition or perhaps their reputation or portfolio is more than enough.
And the implication is the 'quality' of engineers at the companies is actually reversed - the top performers at Dropbox are struggling and leaving while the under performers at FANG are struggling and leaving.
As an ex-Dropboxer, Dropbox asks legitimately tough questions. I only got in because I got asked the exact set of questions that I could figure out the answer for, once I joined and went through interview training I realized I would have failed about half of the questions that Dropboxers ask.
I'm also not sure how it is at other companies (at Google but haven't gone through interview training yet), but Dropbox's rubrics are also pretty strict. Doing "well" on a question requires getting through multiple parts normally with close to zero bugs that you don't catch yourself.
You just described the entirety of the tech hiring experience. Sure, some people are bona fide geniuses who can crack the hardest problems in their sleep. The majority of tech workers, however, are ones who got lucky with the specific combination of questions asked in the interview. Maybe they had seen the question before. Maybe the solution just "clicked". This is why the best strategy for acing such interviews is to just apply to a lot of companies.
Of course, but this is a gradient. A struggling startup may ask a question that 80% of engineers are capable of answering (through luck, exposure, experience -- whatever dimensions).
Dropbox asks difficult questions, and it's hard to discern why. I don't believe the problems at Dropbox are particularly difficult relative to its peers. I don't think they innovate at a clip that's outsized, etc.. But they do this, and their engineering culture focuses on this.
The joke internally is that Dropbox asks lots of concurrency questions because the Dropbox client has 50+ threads :-)
That said, I think what I noticed at Dropbox is that asking lots of tough questions gets you a lot of pretty talented folks who are very interested in solving hard technical problems. So from an infrastructure side, Dropbox was overflowing with talented people. From the product side, though, it was harder for teams to staff frontend projects or make progress when their ideas were challenged by infra.
It's easy to criticize the current state of tech interviews but I've never really heard anyone propose a better solution.
Startups ask questions 80% of engineers can answer because they don't have many applicants. Dropbox might have 50 decent applicants who could all probably do the job for every opening. How do you decide who gets it? Ask an easier question and you end up with way more passes than you have openings.
To be clear: not criticizing the current state of hiring. I agree nothing better exists, and it's a narrow mindset to think companies do this when there are better approaches.
My point is that Dropbox asks questions that are beyond what its peers ask, and even beyond companies that aren't its peers (FAANG, which we don't think Dropbox is a part of).
This is an explicit decision by the engineering leaders at Dropbox. It's interesting, and I'm wondering if it works for them.
I think people put too much effort and time in this whole interview business.
My suggestion, based on my experience, is to spend a reasonable time like a month, brushing up on most common data structures and algo.
Spending a month on interview training is exactly what I would consider unreasonable effort.
Here's a scenario – I have worked at big tech company A for a decade writing backend code. Big tech company B has many openings for senior backend coders, and is desperate to fill them. B's recruiters are hounding me every day to consider a switch. The job looks interesting, and the salary and benefits are great.
Should be a perfect fit, you say? Except if I interview with B today I'll get rejected at the very first coding stage. The barrier to entry they have created for themselves is me taking time away from my current job to solve programming puzzles, solving them perfectly in an interview setting, then throwing away all of that knowledge. They are never going to make me go through this effort unless I am truly unhappy at my current role.
This is the exact reason companies are finding it so hard to hire engineers, and why they have to pay them so much to switch.
Have you ever run interviews? I have, and there's an incredibly wide range of candidates, all with fantastic pedigrees, but some completely incapable of demonstrating their problem-solving ability. Likewise, some candidates have poor pedigrees but make it clear to me that they can think, communicate, and code.
I'm not saying that some of my rejections can't do that. But I can't measure what I didn't observe.
I also get mountains of emails from recruiters about how deeply impressed they are with my skills, and how I should go work for their firms. I know that they are completely full of shit, because from looking at my LinkedIn, they have no idea whether or not I have any of those skills.
The issue is filter accuracy. I have seen some of my favorite coworkers, who showed great pairing skills in the real world, melt down in typical algorithm interviews. I've also seen great performers in interviews who, within a month, were obviously terrible hires from a purely technical perspective.
I've interviewed four digits' worth of candidates, under different rubrics and different expected difficulty levels, and all the good I can say about the modern interview is that it at least tends to weed out the people who can't write code at all, provided you make sure nobody is feeding them answers. But can I say that interview performance with me had much to do with how well I rated a person's work when they were hired and ended up working close enough to me? I don't think so.
That's why, when working at a small enough place I have some control over the interviewing process, I'll offer options to the candidate, and dedicate far more time to the process than I would in a large software firm. It's OK to just raise the technical bar enormously when you are offering the best salaries in the market, and you can expect to never run out of candidates. But when you are not competitive, you have to do something to find great candidates that don't look wonderful in a FAANG style interview format, but will be very good in practice anyway.
You and the person you reply to are both right. Having been on both sides of the aisle, after working for almost 2 decades, this really makes me not look forward to switching jobs anymore. Interviewing for a job really is the worst part of working as a software engineer.
I have, but I just have the candidate brainstorm the solution to a problem, which gives me an idea of their range, then ask about what they have worked on and what they would like to work on.
has worked out well so far tbh, most of these interview processes are overengineered and geared towards nonsense
it's funny, as a side-effect our teams are probably the most diverse and well balanced across skill levels I've seen in my career too
I used to get recruiters messaging me on LinkedIn about my impressive Java experience, in spite of not mentioning Java on my profile. At the time, I was working for a company that used a lot of Java, but I was working on a .NET based product and hadn't touched Java since college.
If B is desperate, they'll just increase salary and benefits until enough people able to pass the interview start applying. Adjectives like "desperate" don't reflect what actually happens: they priced the cost of possibly not hiring soon, of using a different hiring method and of changing the benefits, and they computed (badly?) the optimum to be where they are.
If they become more desperate, things could change, but they don't and things stay the same.
I will throw out counter-experience that I did not succeed at technical interviews until I treated it like a fulltime job. In the past I just brushed up and failed Google's interview a handful of times before I decided enough was enough.
My takeaway ultimately is that how much time and effort one chooses to put in is entirely personal based on how much they want one of these jobs + what their gaps are in algorithms and data structures. Some people are going to need more effort than others.
My strategy is simple: I do interview practice by simply interviewing with companies that I am not really interested in joining. That allows me to practice not only the technical interview but also all other steps involved.
Last time I interviewed, I did 7 of these interviews, targeting: startups, pre-ipo, mid public companies, 2 of the FAANG I would never work for ...
I did that while prepping to interview at the companies I was really interested in working for. Eventually it all worked out and I got a few offers, two of them from FAANGs, including one from one of the two FAANGs I really wanted to join.
Simple but really time consuming. Experienced engineers with responsibilities outside of work don't have time for that nonsense. It's hard enough to do ONE interview, let alone do tons of fake interviews in preparation. No other industry is like this - once you've proven yourself as an experienced professional/exec, if you want to swap to another company, you just have a chat.
"I don't think I am ever gonna job hunting again."
That's what I thought about the job I came to SF for 20 years ago. Then 6 years in they got bought out and I got laid off. They weren't even a tech company. My career has been somewhat downhill ever since. Best of luck to you.
I think the value is mainly in the feedback you get which can help if you have a specific problem you might not be aware of. The problem with just going out on interviews is that you never get any real feedback so you are left to your own analysis as to why a certain interview didn't result in getting to the next step or an offer which, in my experience, is kind of like trying to find a needle in a haystack in a dark barn without a flashlight.
Suffice to say I'd never join Meta. I don't use their platform, I don't like what they do, and most important I don't like their tech bro company culture.
the process is incredibly inefficient and disrespectful. people have already paid for university degrees and getting a degree is no joke, certainly not at a european university. graduating without programming skills is impossible.
what companies should do is ask you to walk them through a piece of code of your choice and discuss that and/or hire you for a month as a contractor and pay you a fair market rate.
> graduating without programming skills is impossible
I’ve encountered many who graduated from bachelor and even PhD programs without programming skills (even from European universities), so your statement is false.
It also seems pretty weird to assume that a degree or certificate is foolproof for demonstrating programming skills. That seems like a lack of critical thinking skills.
I can only speak for myself, but I've never met a (CS) candidate from an accredited Scandinavian university / college that simply could not write code.
There are obviously candidates that struggle with performing under pressure - even those that have gone completely blank, but through some simple guiding have bounced back.
To be frank, at the schools I studied at, it would be impossible to fake your way through a whole degree, unless you either got someone to impersonate you and take your exams, or you cheated your way through every single exam...but even then, your grades would be a big red flag.
Sure, there are bad programmers out there - but not being able to punch out a line of valid code, after obtaining a whole degree in the subject...seems extremely unlikely - sans the extreme edge cases.
Even the straight E / D students I've worked with managed to cobble together working stuff, without the aid of search engines or stack exchange.
I suppose you’re very fortunate, but I wouldn’t generalize anything from your experience. Just like I wouldn’t generalize from mine.
I’ve worked with programmers from Scandinavian university degrees and some sucked. Not sure how they passed, but the degree was 15 years old and this was the early 00s.
Maybe Scandinavian countries have more rigorous degree programs. If so, they are fairly unique in that regard. I don't see how we can use that during hiring though. What are tech companies supposed to do: only hire from these universities? That seems less fair than the current process.
how so? you are rewarding students for working hard to get their degree. the current process basically tells them: we don’t care that you worked crazy hard, it’s back to square one. they might just as well have gone straight into an internship after high school…
also: this is not just the case for scandinavian universities but also the case for universities in most of western europe especially if they are non-private.
I believe you that this is the level of talent at western European universities but what about all the other universities outside of Western Europe that aren't like this. Should companies exempt you from a grueling interview process only if you are from one of these schools?
sure, why not? companies collect info about what school you graduated from and your grades anyway already. they can associate that data with who they fired to decide if they want to hire out of that school.
i don’t see how that would skew the stats as it would affect only a few courses you take and you can still do decent and self study if the professor is crap.
well, where i graduated, people DID have those skills. so you are wrong.
besides, companies already ask what university you went to and could get your records and individual class grades and use that combined with number of people fired that had the same degree to figure out if they want to hire you or not. after all, we are in the era of data science?
and instead of paying recruiters lots of money for shuffling word documents around, they can use that money to pay prospective candidates for a month. it will be cheaper.
the real reason for coding interviews is to wear you out so they can pay you less. the problem is that people go along with this clown show instead of simply walking away and doing something else.
You end up with a far less merit based hiring pipeline. I work at a top tech company and don't have any degree at all. The current process gives people like me a chance, yours wouldn't.
the current process rewards people who memorize and grind leetcode answers, rather than those with an actual education and skills, which is a problem. so no, the current process is not merit based but rather measures to what extent you are willing to exhaust yourself to get a job.
I did maybe 5-10 hours of preparation for my current job's interview and most of my colleagues didn't grind either. I'm skeptical of this idea that the only people who work at FAANG are Leetcode machines that have sacrificed their lives to grind questions.
then maybe you are an outlier and were asked the exact questions you prepared for. most people here mentioned 2-3 months of grinding and that is what i also experienced firsthand.
the interview process does not optimize for great people with relevant experience but for those who repeatedly grind leetcode problems irrelevant to the job.
i interviewed a while ago at a FAANG where they had a long standing problem related to their OS. i knew the solution because i recently brought a product to market built on top of a similar OS and had to address the same problem.
but it was more important to them that i passed their coding test within a few minutes, and that i chose the optimal solution, which can only be found if you can spend an hour thinking about it. i bombed the interview because i wasn't used to coding interviews, to having to talk while i think, and to doing all of this in a few minutes.
they are probably still wondering how to fix their problem.
I meant risk for the candidate. I have to quit my current job just to try out for yours. If after the month you decide not to hire me I'm now unemployed. Sounds like a terrible deal for me.
A month? That's what I would consider "too much effort". What I do is:
- read the whole website of the company I want to work for
- apply
That's it. If after N years of experience I'm not able to pass the tech interviews, then I'm not a suitable candidate for them, and I simply accept that. To me, spending a month memorizing and refreshing concepts I learned when I was at uni feels like cheating (because I'll forget what I quickly studied within a few days).
I do "study" on my own constantly, though. I'm always reading tech books and trying new things. I do it at my own pace. Perhaps that's the reason why I don't prepare myself for interviews.
> If after N years of experience I'm not able to pass the tech interviews then I'm not a suitable candidate for them, and I simply accept that.
I wonder if you are thinking of the same tech interviews that anybody else here is talking about. Those tests are almost inversely correlated with N years of experience. Without specifically studying for those tests you would definitely score lower than an inferior (but well prepared) candidate.
It's insane to me that we have to rehash everything every time. You graduate, you leetcode, you rehearse for a round of interviews, and then you have to do it all over again. So boring.
This is a lot easier to say when you're established in the industry and you have a "feel" for interviewing. When you're breaking in, you have to learn all the computer science shibboleths and industry jargon before people will take you seriously as a candidate.
They all mostly ask the same sets of questions, too. If you get turned down by one interviewer, just keep track of what questions they asked, go look up the answers to those questions and you'll most likely be asked the same question by the next interviewer.
Does this happen in any other field? If I was a doctor and wanted to work at some other company, would I need to study the MCAT every year in order to pass a screening interview based on one possible question? What's the equivalent in other fields? Closest I can think of is an acting audition but even then they give you the script beforehand. I'm beginning to think that the industry somehow settled on this approach not so much as a skills verification process but, by making the process so onerous on the candidate, talent retention is a lot easier.
Software engineering is the only field I know of where someone with zero pedigree or education can make as much as a doctor. If you can pass a grueling technical interview then you can get a high paying job (200-500k+) no matter who you are or what your background is.
The tradeoff is that you have to go through the same grueling process every time you change jobs. I think the tradeoff is well worth it and would not like to see software become more like medicine where you have to put in 10+ years of schooling to prove your worth.
I don’t need a 200k+ job at a FAANG, but it doesn’t seem to matter, as FAANGs are perceived as having the best hiring practices (because they wouldn’t be so successful otherwise, right?), so the entire industry just blindly follows suit.
> "If I was a doctor and wanted to work at some other company, would I need to study the MCAT every year in order to pass a screening interview based on one possible question?"
If you were a doctor, you would have gone through 4-5 years of supervised residency after medical school (https://en.wikipedia.org/wiki/Residency_(medicine)) to guarantee some minimum level of competency before becoming a practitioner. Pay is notoriously poor and the hours are extremely long.
Programmers have it exceedingly good and the only reason for that is that software is still in its growth phase. I doubt we'll all be still riding the gravy train in another 25-50 years.
Yes but once you get through your residency your expected earnings go up quite a bit and you don’t go back to square one. If someone had told me that I’d still need to prep for leetcode interviews at age 55 I’d have chosen the MD path given a choice, but I’m also old enough to remember the days when you could get a decent coding job based mainly on your experience so from my perspective things have in general gotten demonstrably worse in terms of process for anything other than a lottery-ticket FAANG position.
At least a welder actually welds stuff during their day-to-day. Still looking for a job that needs people who can implement efficient graph traversals though.
> I'm beginning to think that the industry somehow settled on this approach not so much as a skills verification process but, by making the process so onerous on the candidate, talent retention is a lot easier.
There are plenty of applicants who just can't do the job, and the cost of restarting the search all over again is high. It's important for a team to vet their new hires.
I’ve had interviews where I implemented a small feature on an existing application. It didn’t take any longer than your typical hour-long algorithmic implementation interview. Given that we don’t have post-hire data on any of these practices, my hypothesis is the one that most closely resembles the actual work would give the best signal as to whether that person would be able to do the job or not.
I agree that style of interview can be better. I think FAANG companies don't use this style because it's not selective enough and it requires special questions for every role.
- First of all, it assumes that `interviewing.io` is some sort of certification standard (which I'm willing to bet is the actual point of publicizing these 'studies' in the first place. It's 'fact' manufacturing)
- Then there's selection bias about engineers actually using one platform vs the other
- Touting the data set size in order to give the 'study' some credibility is a red flag for me. You can analyze millions of the same technical interview and deduce all sorts of conclusions.
- The use of 'best performers' is deceitful. It means 'best performers in the interview context'. But using it in the context of where do they work, it implies something like 'the best performing engineers are at company X'. Which is bullshit. More like 'best trained engineers to pass these interviews work at company X'.
Garbage. I'm flagging this as it's nothing more than self-serving marketing.
"Of course, the really interesting question in all of this is the holy grail of technical recruiting: Does performance in interviews reliably predict on-the-job performance? "
And until you can really say for sure this is the case, any speculation about the value of technical interviews other than just being a barrier to entry is really moot.
Going to be real interesting to see how I do at technical interviews whenever I decide to jump back into the job market.
At the job I'm leaving tomorrow, I just did a 2-hour video session / training where I started by teaching how to read call graphs, and that led to over an hour of me trying to sort out a horrible performance issue in real time.
The problem, the solution, and the iterated debugging to deal with all the edge conditions that the extensive unit tests called out (I wrote all the unit tests that blew up, so I get to take credit for that, although I also wrote the bug I fixed) should show that I'm a very high-functioning engineer. I had identified the problem previously at a higher level and had a fix that papered over it, but during the video I correctly figured out that the real source of the problem was deeper in the code, and had existed before the change that surfaced it. I then managed a data-driven analysis to track down the perf bug and take one subsystem from 15% of CPU time to 1%, a 15x speedup on my problem (and probably closer to a 90x speedup for the customers who were reporting it, including a large customer everyone here is familiar with due to headlines they're involved in).
Meanwhile I forgot that it was obj.send(:method, *args) in ruby and tried to obj.call(:method, *args) and had to look that up because my brain was derping a bit on that, and the night before I forgot it was JSON.generate in ruby and not encode/decode and just in general my brain is a mash of too many different programming language syntaxes. At one point I caught myself trying to use `%` for comments because I had been doing Matlab writing an Iterated Chebychev Picard Method IVP ODE solver the prior weekend. If I can't work with the command line or an IDE and with google I'm just going to be a mess of trivial mistakes due to crossed wires.
I've also never reversed a linked list in my life and the correct answer to that question is probably to never use a linked list due to TLB cache thrashing at the very least.
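(For anyone who hasn't seen it either, the iterative reversal that question usually asks for is only a few lines. A sketch in Python, with a hypothetical `Node` class since the question rarely specifies one:)

```python
class Node:
    """Minimal singly linked list node (hypothetical, for illustration)."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def reverse(head):
    """Iteratively reverse a singly linked list; returns the new head."""
    prev = None
    cur = head
    while cur is not None:
        nxt = cur.next   # remember the rest of the list
        cur.next = prev  # point this node backwards
        prev = cur       # advance prev
        cur = nxt        # advance cur
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
head = Node(1, Node(2, Node(3)))
rev = reverse(head)
out = []
while rev is not None:
    out.append(rev.value)
    rev = rev.next
print(out)  # [3, 2, 1]
```

Which doesn't change the point that a contiguous array is usually the better data structure in practice.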
I work there. Our attrition is no different from other companies'. We have orders of magnitude fewer engineers than the other companies mentioned, so most likely the few from Dropbox who interview are, on average, better than the average from other companies. That's all.
Or simply that they have significant bias in their interview process for selecting candidates with strong interview skills. At that point it’s tautological—if you only select people who ace interviews then you’ll have a bunch of people who ace interviews.
Author here. We didn't publish a histogram showing how many users we have from each company, so I guess you'll have to trust me on this one. Dropbox has waaaay fewer engineers on interviewing.io per capita than some of the other companies... many of whom didn't make the top ten list.
In our experience, they're quite good at retaining talent.
it's possible. we did see statistical significance when comparing dropbox to others despite the relatively smaller sample size. but yes, that is possible.
In this case, is 0% passing really a meaningful bar? You could flip the chart upside-down and do it on fail rates, and the differences would look like this.
Interesting how everyone here reads the title differently than me.
I read “where the best performers work” as not where do the best coders/employees/whatever work… I read it as people who perform the best work. Where do people who do the best performance work? And in the context of interviewing - I don’t find this particularly weird to look at. Interviews are a performance.
That said - I’m apparently a weirdo!
Also, limiting this to companies with 50+ employees who have used the platform is definitely going to restrict the analysis to very large companies. Even where I'm at now (1,500 engineers), I would be surprised if 50+ had used the platform, because 75%+ were hired in the last 1-2 years.
Also - the way everyone here critiques the rating system - what a joke. You think the rubric at these big companies is really any better? You just find a way to shove your core feeling into the rubric and that’s it.
Many others have commented on possible biases here. That's true, but I also find this a really interesting observation based on a huge dataset. Once you can correlate it to actual job performance - performance reviews, whether you were fired or not, maybe even salary -, it might be one of the best ways to see if technical interviews actually measure anything significant.
I would never want the best developers. Imagine the giant pain of these prima donnas always thinking they can get paid more at their next job. And how you should give them this or that, or how you should change your development stack, language, processes, chair color, etc. I only need a few geniuses and a lot of normal ones. 80/20, remember?!
So, the best developers are always prima donnas now? Maybe the ones who grind leetcode all the time because they have something to prove are the prima donnas, and you are somehow selecting for them.
Skills-wise, a balanced team makes sense. Put a 10x type person in a room full of 0.5x people and yeah, they'll start complaining about the chair color, because the job sucks and they would kill for the chance to get on a better team.
The very best people I've ever worked with don't fit any of those characteristics you've mentioned.
Although I will say if a developer of yours wants a different chair, get them a better chair. They aren't that expensive and they are durable. Sometimes the overhead of getting $1000 purchases approved adds up to almost the same as the purchase itself. Give your developers some leeway to order equipment, software, books, etc... without approval up to some reasonable limit. It won't cost the company that much and your people will be happier and feel trusted.
This just sounds like Dropbox, Google have a similar interview to interviewing.io. So if you're able to clear the Dropbox interview bar, you'll do well on their test. It doesn't really say anything about general work performance, only performance on a specific interview.
I always enjoy your work, @leeny. I remember quite a few years ago you were working as a recruiter and I liked working with you then, although we didn't wind up with a placement for me.
It would be interesting to show error bars on the graphs. Seeing 95% confidence intervals would give a much better idea of whether we're looking at meaningful signal or just noise.
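As a rough illustration of how cheap this would be, a normal-approximation (Wald) interval for a pass rate is one line of arithmetic. A Python sketch with made-up counts, since the article doesn't publish per-company sample sizes:

```python
import math

def wald_ci(passes, total, z=1.96):
    """Normal-approximation 95% confidence interval for a pass rate."""
    p = passes / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical: 120 passes out of 200 interviews for one company.
lo, hi = wald_ci(120, 200)
print(f"pass rate 95% CI: {lo:.3f} to {hi:.3f}")
```

With small per-company samples the interval gets wide fast, which is exactly the signal-vs-noise question error bars would answer.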
The fallacy in these findings and conclusions is that you're measuring processes of your own brand and flavor; the biases run so deep you likely don't recognize them. 'Technical', 'problem solving', 'communication': these are all of your own construction, setting, presentation, and scoring. Look at the failings of the IQ test, for example.
I'm curious if people just really need to learn the talk and the walk of a silicon valley employee to land a job at a FAANG.
> The fallacy to these findings and conclusions is that you're measuring processes of your own brand and flavor
> Look at the failings of the IQ test for example.
You are misinterpreting the point of all of this. This website does practice interviews. It doesn't measure how good an engineer someone is, or how smart they are, and they don't claim to do so.
Instead, it measures how good they are at tech interviews. And for that purpose, the study works well enough.
All this post says is that among the candidates who scored highest on this model, a higher proportion worked at these companies.
But there's no inverse analysis: of people who worked at these companies, how predictive was that overall of a higher score on this particular assessment?
'Our five highest scores ever were all people who wanted to leave FooCo' tells you little about the overall quality of FooCo employees. Maybe the rest of them are terrible and these five needed to get away?
If Dropbox engineers are so good across the board, why are they interviewing at all? Surely the desire to move away from this kind of technical nirvana would be almost zero?
Dropbox was famously leading the SWE comp charts[1]. In particular, they focused on equity grants both pre- and post-IPO.
Well, live by the equity sword, die by the equity sword: SP500 is up 16 percent in the past year, DBX is down 10 percent. Even over the long term, DBX is down 15 percent since their IPO 5 years ago, while SP500 is up 77 percent. It's the kind of performance that results in layoffs[2], which will probably hit your most expensive / experienced engineers first. Glassdoor ratings suggest remote work did not go well when combined with the layoff.
I honestly don't care which company retains the best performers. It's none of my business. Maybe if you're an investor in these companies, such trivia might be a useful signal. But from a candidate's standpoint, this page is super useful -
https://interviewing.io/recordings
This reminds me, a friend of mine recently indicated that Dropbox has become an Amazon graveyard due to the mass exodus of talent from Dropbox to other companies that feel like Dropbox did about 5 years ago. Without their perks and fancy food, Dropbox is a boring product company with not much upside at this point.
I have worked with a few developers who went on to work for FAANG companies. They were OK but not the best I have worked with. They did however have an inflated opinion of their own skills. Perhaps that helps when interviewing?
It's interesting that Google was not higher for communication. Aren't they known as the big tech company with the "writing culture"? I guess 'communication' means something different here.
How is it that Amazon was better than both Facebook and Google overall, but worse when comparing Technical, Problem Solving, and Communication? What are the other undisclosed ratings that make Amazon better overall?
Sometimes I look at the credits lists for very impressive, native 90s desktop programs, that were made in a reasonable amount of time by a small group of people who, at least to some degree, had worse tools than we do now, for a smaller market and lower pay. Usually to be shipped on CD, so they even had to mostly work and not have too many bugs at launch, on top of that. And I wonder WTF happened to our industry and what the armies of programmers at all these companies are doing.
Rather than showing the top ten for statistical significance, wouldn't it make more sense to use PCA on the ratings and show each component's top 10 instead?
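Concretely, if the per-dimension ratings form a candidates-by-dimensions matrix, the components fall out of an SVD of the centered data. A sketch in Python with entirely made-up ratings (the article's raw data isn't public):

```python
import numpy as np

# Hypothetical per-candidate ratings: technical, problem solving, communication.
ratings = np.array([
    [4.0, 3.5, 3.0],
    [3.0, 3.0, 4.0],
    [4.5, 4.0, 2.5],
    [2.5, 2.0, 3.5],
    [3.5, 3.0, 3.0],
])

# Center the columns, then take the SVD; rows of Vt are the principal axes.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = S**2 / (S**2).sum()  # variance share per component
scores = centered @ Vt.T         # per-candidate score on each component
print(explained.round(3))
```

One could then rank companies by their candidates' scores on each component, rather than on the raw (likely correlated) rating dimensions.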
My conclusion is that Dropbox has a lot of smart people who want to leave. Higher numbers of interviewees mean more folks want to leave and are practicing to do so.
They used to be the best but then ran their client software into the ground. Weirdly the Linux client is still the old school one and since I only use Linux it's fine for me.