by michael_mroczka on 1/31/24, 5:35 PM with 474 comments
by jhawk28 on 2/1/24, 3:30 AM
by babyshake on 1/31/24, 8:41 PM
Allow them to use the tools, with a screenshare, and adjust the types of tasks you give them so that they can't just feed the question to the LLM and get a completed answer.
Interviews should be consistent with what day-to-day work actually looks like, which today means constantly using LLMs in some form or another.
by svilen_dobrev on 2/1/24, 9:18 AM
While, wait, thinking...
So all this "cheating" is maybe a (slightly delayed) response to the trend above..
by rvz on 1/31/24, 9:30 PM
Why waste each other's time in an interview when I (if I were the interviewer) could just ask for relevant projects or commits to a major open-source project on GitHub? That alone eliminates 90% of the candidates in the pool.
I don't need to test you if you have already made significant contributions in the open. At the very least, some easy assumptions can be made:
* Has knowledge of Git.
* Knows how to code in X language in a large project.
* Has done code reviews on other people's code.
* Is able to maintain a sophisticated project with external contributors.
Everything beyond that is secondary or optional. It's a very efficient evaluation, and one that's hard to fake.
When there are too many candidates in the pipeline, leetcoding them all is a waste of everyone's time. Overall, leetcode is optimized to be gamed, and it is now a problem that ChatGPT has solved.
by tetha on 1/31/24, 9:05 PM
The latter might look like something you could fake with ChatGPT, but it'd be hard. For example, some time ago I was interviewing an admin with a bit of a monitoring focus and... it's hard to replicate the amount of trust I gained in the guy when he was like "Oh yeah, munin was ugly AF but it worked... well. Now we have better tech".
I guess that's consistent with the article?
by SheinhardtWigCo on 2/1/24, 12:42 AM
> Cheetah is an AI-powered macOS app designed to assist users during remote software engineering interviews by providing real-time, discreet coaching and live coding platform integration.
by m1el on 1/31/24, 8:46 PM
Of course this will change in the future, with more interactive models, but people who use ChatGPT in interviews do a disservice both to themselves and to the interviewer.
Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why am I interviewing you? Why would I recommend you as a candidate for the position?
by michael_mroczka on 1/31/24, 8:47 PM
by edanm on 2/1/24, 8:26 AM
Some interviewers wouldn't mind, or would even encourage, using all available tools to solve problems.
by gregors on 2/1/24, 2:11 PM
by nottorp on 1/31/24, 9:10 PM
It works fine for stuff like "give me a tutorial on how to initialize $THING and talk to it" or "how do i set $SPECIFIC_PARAMETER for $THING up".
Where it seems to fail is when you ask "how do I set $X" and the answer is "you can't set $X from code". I got some pretty bad hallucinations there. At least from the free ChatGPT.
So maybe add a trick question where the answer is "it can't be done"? If you get hallucinations back, it should be clear what is up.
Edit: not that I'm a fan of leetcode interviews. But then to get a government job in medieval China you had to be able to write essays based on Confucius. Seems similar to me.
by OnionBlender on 1/31/24, 9:00 PM
I would still need to get good at leetcode, just not _as_ good.
by tnel77 on 2/1/24, 3:36 PM
1) A couple basic coding questions. FizzBuzz and such. Can they actually solve something basic?
2) Do a real code review with this person. Share your screen and let them review code. Observe what questions they ask and the comments they leave for the author.
3) Ask some design questions. Digging in on how they would design the classes for some new product and purposely throwing a twist in there from time to time. How do they handle this new information and adapt their design? Do they take constructive criticism well?
4) Talking to this person. Are they polite and respectful? You can help someone grow as an engineer, but good luck getting them to be a better coworker if they are rude.
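The "basic coding questions" in step 1 don't need to be elaborate; the classic FizzBuzz mentioned above is enough to confirm a candidate can write working code at all. A minimal Python sketch of what an acceptable answer might look like:

```python
def fizzbuzz(n: int) -> str:
    """Classic screening question: 'Fizz' for multiples of 3,
    'Buzz' for multiples of 5, 'FizzBuzz' for both, else the number."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Print the sequence for 1..15
print(", ".join(fizzbuzz(i) for i in range(1, 16)))
```

The point isn't the code itself but watching the candidate produce something like it unassisted, in a few minutes, while talking through it.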
by rbut on 2/1/24, 1:16 AM
by justin_oaks on 1/31/24, 11:42 PM
Removing interviewees because they don't follow directions seems like a good strategy. And I mean removing them as job candidates, not removing them from the study.
It's good to have something in the interview process used explicitly for weeding out people who don't follow directions. Something like "Email us your application, and put the word 'eggplant' somewhere in the subject line. We use it to filter out spam." And then literally delete any application that doesn't have "eggplant" in the subject line.
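A filter like that is trivial to automate. A minimal Python sketch, where the keyword comes from the comment above but the sample subject lines and function names are invented for illustration:

```python
KEYWORD = "eggplant"

def passes_filter(subject: str) -> bool:
    """Keep only applications whose subject line contains the keyword,
    case-insensitively."""
    return KEYWORD in subject.lower()

# Hypothetical incoming subject lines
applications = [
    "Application for backend role [eggplant]",
    "Resume attached",                 # dropped: didn't follow directions
    "Eggplant - senior engineer",
]
kept = [s for s in applications if passes_filter(s)]
```

The real version would hook into whatever inbox or ATS you use, but the weed-out logic is exactly this one membership check.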
by nostromo on 1/31/24, 8:47 PM
I worry though that it'll just be the end of online leetcode interviews and employers will bring people back into the office to interview.
by User23 on 2/1/24, 12:55 AM
by andreagrandi on 2/1/24, 9:50 AM
GPT is a tool which can legitimately be used to do your job.
There are so many things that GPT can't do: make decisions, find the best approach to talking with a human being, resolve conflicts between two members of the team, and, last but not least, explain the why behind a certain solution.
Is it a coding test? Pair with the candidate. See how they think. Ask yourself: would I enjoy working with this person?
And make your own decision.
by bluedino on 1/31/24, 8:45 PM
by TrackerFF on 2/1/24, 2:51 PM
Companies need to start conducting interviews where tools are available.
When everyone is cheating, no-one is cheating. Trying to customize your questions is just a race to the bottom, and will always be an arms race against the LLMs.
So, instead, let the candidate use whatever tools they want - in the open, and rather probe them on their thought process.
by qweqwe14 on 1/31/24, 8:58 PM
Not to mention the fact that some interviewers feel obliged to ask useless cliche questions like "why do you think you are a good fit for this position" yada yada.
Not going to be surprised if picking people based on random chance (if they meet basic requirements) is going to actually be better statistically than bombarding them with questions trying to determine if they are good enough. Really feels like we are trying to find a pattern in randomness at that point.
Bottom line is that if ChatGPT is actually a problem for the interview process, then the process is just broken.
by locallost on 1/31/24, 9:04 PM
by kmoser on 2/1/24, 5:45 AM
Interviewer: "Create a coding challenge that requires sorting a CSV file by timestamp. Require the timestamps to be in some weird, nonstandard format and describe the format. Provide a few entries in the sample data that contain a timestamp which is ambiguous."
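A challenge generated from a prompt like that might boil down to the following sketch. The "weird" timestamp format (`day@month@year hour.minute`) and the sample rows are invented here purely for illustration:

```python
import csv
import io
from datetime import datetime

# Hypothetical nonstandard format: "07@03@2024 14.05" = 7 March 2024, 14:05.
# Note "01@02@2024" is ambiguous to a human reader (Jan 2 vs Feb 1)
# unless the format spec is stated, which is the point of the exercise.
FORMAT = "%d@%m@%Y %H.%M"

SAMPLE = """event,timestamp
deploy,07@03@2024 14.05
build,07@03@2024 09.30
alert,01@02@2024 23.59
"""

def sort_by_timestamp(csv_text: str) -> list:
    """Parse CSV rows and sort them by the nonstandard timestamp column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return sorted(rows, key=lambda r: datetime.strptime(r["timestamp"], FORMAT))

for row in sort_by_timestamp(SAMPLE):
    print(row["event"], row["timestamp"])
```

The ambiguous entries are what make it interesting as an interview question: does the candidate notice them, ask about them, or silently pick an interpretation?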
by bspates on 2/1/24, 1:39 AM
by p0w3n3d on 1/31/24, 9:54 PM
by glonq on 2/1/24, 8:06 PM
by aryehof on 2/1/24, 5:46 AM
by jawr on 1/31/24, 10:46 PM
by lkdfjlkdfjlg on 1/31/24, 9:21 PM
by ngneer on 1/31/24, 10:54 PM
by hijinks on 1/31/24, 8:50 PM
You can 100% tell when someone is reading off a screen and not looking at you during an interview via webcam.
by willcipriano on 1/31/24, 11:26 PM
by megamix on 2/1/24, 9:26 AM
by jgilias on 2/1/24, 9:21 AM
The negative assessment by one of the interviewers, that a candidate "hadn't prepared to solve even the most basic LeetCode problems", is especially telling.
Maybe the candidate had really honed their sudoku solving skills instead.
by mirekrusin on 2/1/24, 9:12 AM
ChatGPT (or local/hosted LLMs) should be tools available at workplace nowadays.
Interviewing while using LLMs, Wikipedia, Google, SO, O'Reilly, or whatnot should be not only allowed but encouraged.
Just have a conversation/pair-programming-style session with GPTs open and shared, just the way you'd work with that person.
That's how they'll work for/with you.
Mission. Fucking. Accomplished. [0]
by nine_zeros on 1/31/24, 8:55 PM
by andrewstuart on 1/31/24, 9:50 PM
- Using an IDE is not cheating.
- Using StackOverflow is not cheating.
- Reading the documentation is not cheating.
I would expect candidates for programming jobs to demonstrate first class ChatGPT or other code copilot skills.
I would also expect them to be skilled in using their choice of IDE.
I would expect them to know how to use Google and StackOverflow for problem solving.
I would expect programmers applying for jobs to use every tool at their disposal to get the job done.
If you come to an interview without any AI coding skills you would certainly be marked down.
And if I gave you some sort of skills test, then I would expect you to use all of your strongest tools to get the best result you can.
When someone is interviewed for a job, the idea is to work out how they would go about doing the job, and doing the job of programming means using AI copilots, IDEs, StackOverflow, Google, GitHub, and documentation, with the goal of writing code that builds stuff.
It's ridiculous to demonise certain tools. For what reason? Prejudice? Fear? Lack of understanding?
There's this idea that when you assess programmers in a job interview they should be assessed while stripped of their knowledge tools. Absolute bunk. If your recruiting process strips candidates of their knowledge tools, then you're holding it wrong.
by p0w3n3d on 1/31/24, 9:57 PM
by pcthrowaway on 2/1/24, 5:34 AM
For senior+ candidates I honestly think the correct approach is to just lean into it though.
Encourage them to use ChatGPT at the outset, and select questions that you've already fed to the prompt. When you ask them the question, you can show them ChatGPT's output on a screenshare. The candidate can then talk you through what they like about the answer, as well as where it falls short.
A senior-level developer should almost always be capable of improving on any response given by ChatGPT, even when it gives a good solution.
And if they're not able to give better output than the current AI tooling, it's a pretty good signal that you'd be better off just using LLMs yourself instead of hiring them.