from Hacker News

Anthropic: "Applicants should not use AI assistants"

by twapi on 2/3/25, 7:46 AM with 401 comments

  • by radu_floricica on 2/3/25, 10:11 AM

    I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take it as useful information for the application process. They do encourage the use of AI, but they're asking nicely that I write my own text for the application - that's a reasonable request, and I'd have nothing against complying.

    sshine's reply below comes from a very conflictual mindset: "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"

    I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long-term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.

  • by sshine on 2/3/25, 8:53 AM

    > please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.

    There are two backwards things with this:

    1) You can't ask people to not use AI when careful, responsible use is undetectable.

    It just isn't a realistic request. You'll have great replies without AI use and great replies with AI use, and you won't be able to tell whether a great reply used AI or not. You will just be able to filter out sludge and dyslexia.

    2) This is still the "AI is cheating" approach, and I had hoped Anthropic would be a thought leader on responsible AI use:

    In life there is no cheating. You're just optimizing for the wrong thing. AI did your homework? Guess what: the homework is a proxy for your talent, and it didn't build your talent.

    If AI is making your final product and you're none the wiser, it didn't really help you; it just made you addicted to it.

    Teach a man to fish...

  • by fancyfredbot on 2/3/25, 8:58 AM

    If I want to assess a candidate's performance when they can't use AI, then I think I'd sit in a room with them and talk to them.

    If I ask people not to use AI on a task where using AI is advantageous and undetectable, then I'm going to discriminate against honest people.

  • by ben30 on 2/3/25, 9:36 AM

    This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.

    LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.

    The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.

    When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.

    This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?

  • by sigmoid10 on 2/3/25, 8:48 AM

    This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in recent months. It's not the applicants that are the problem, it's the outdated way of doing interviews.
  • by jusomg on 2/3/25, 10:16 AM

    I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).

    I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview was never just to answer the coding puzzle anyway.

    To me it's always been about how someone reasons, how someone communicates, and whether people understand the foundations (data structure theory, how things scale, etc.). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment, you're not going to pass the interview, no matter whether it came from AI, from memory, or from Stack Overflow.

    So what are we afraid of? That people are going to copy-paste from AI outputs and we won't notice the difference from someone who really knows their stuff inside out? I don't think that's realistic.

  • by neilv on 2/3/25, 9:28 AM

    Kudos to Anthropic. The industry has way too many workers rationalizing cheating with AI right now.

    Also, I think that the people who are saying it doesn't matter if they use AI to write their job application might not realize that:

    1. Sometimes, application questions actually do have a point.

    2. Some people can read a lot into what you say, and how you say it.

  • by CaptainFever on 2/3/25, 9:28 AM

    > While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

    Full quote here; seems like most of the comments here are leaving out the first part.

  • by _heimdall on 2/3/25, 12:14 PM

    The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.

    This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.

    If you know your employees use LLMs at work, and even encourage it, you should want to see how well candidates present themselves in that same situation.

  • by Daub on 2/3/25, 10:42 AM

    Halfway through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' questions. Once the questions became more AI-resistant, the candidate floundered: their English language skills and their general reasoning declined catastrophically. These questions had originally been introduced to see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'
  • by firtoz on 2/3/25, 8:48 AM

    It's also a personal question, not a "why should someone work here", but a "what motivates YOU"
  • by nialse on 2/3/25, 8:52 AM

    It makes sense. Having the right people with the right merits and motivations will become even more important in the age of AI. Why, you might ask? Execution is nothing once AI matures. Grasping the big picture, communicating effectively, and possessing domain knowledge will be key. More roles in cognitive work will become senior positions. Of course you must know how to make the most of AI, but what skills you bring to the table without it is more interesting.
  • by npteljes on 2/3/25, 10:28 AM

    Funny on the tin, but it makes complete sense to me. A sunglasses company would also ask me to take off my sunglasses during the job interview, presumably.
  • by 1GZ0 on 2/3/25, 9:20 AM

    You wouldn't show up drunk to a job interview just because it's at a brewery, would you?
  • by rcarmo on 2/3/25, 10:07 AM

    Anthropic is kind of positioning themselves as the "we want the cream of the crop" company (Dario himself said as much in his Davos interviews), and what I understood was that they would a) prefer to pick people they already knew, and b) didn't really care about recruiting outside the US.

    Maybe I read that wrong, but I suspect they are selecting themselves out of some pretty large talent pools, AI or not. But that application note is completely consistent with what they espouse as their core values.

  • by rapidaneurism on 2/3/25, 8:49 AM

    Do they also promise not to use AI to evaluate the answers?
  • by nejsjsjsbsb on 2/3/25, 8:59 AM

    Not new - they had that 5 years ago at least.

    The Anthropic interview is nebulous. You get a coding interview: fast paced, little time, 100% pass mark.

    Then they chat to you for half an hour to gauge your ethics. Maybe I was too honest :)

    I'm really bad at the "essay" subjects vs. the "hard" subjects, so at that point I was dumped.

  • by gcanyon on 2/3/25, 12:14 PM

    Everyone arguing that LLMs are a corrupting crutch needs to explain why this time is different: why the grammar-checkers-are-crutches, don't-use-Wikipedia, spell-check-is-a-crutch, etc. people were all wrong, but this time the tool really is somehow unacceptable.
  • by jonsolo on 2/3/25, 11:46 AM

    The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.

    Disclaimer: I work at Anthropic but these views are my own.

  • by andy_ppp on 2/3/25, 10:20 AM

    If your test can be done by an LLM maybe you shouldn't be hiring a human being based on that test...
  • by yapyap on 2/3/25, 12:00 PM

    It's 'cause they wanna use the data to train AI on, and training AI on AI is useless.
  • by aabhay on 2/3/25, 8:52 AM

    It’s always the popular clubs that make the most rules
  • by yosito on 2/3/25, 11:56 AM

    How much you wanna bet they're using AI to evaluate applicants and they don't even have a human reading 99% of the applications they're asking people to write?

    As someone who has recently applied to over 300 jobs only to get form-letter rejections, I find it really hard to invest my time hand-writing an application that I know isn't even going to be read by a human.

  • by Applejinx on 2/3/25, 10:59 AM

    Maybe they are ahead of the curve at finding that hiring people based on ability to exploit AI-augmented reach produces catastrophically bad results.

    If so, that's bad for their mission and marketing department, but it just puts them in the realm of a tobacco company, which can still be quite profitable so long as they don't offer health care insurance and free cigarettes to their employees :)

    I see no conflict of interest in their reasoning. They're just trying to screen out people who trust their product, presumably because they've had more experience than most with such people. Who would be more likely than an AI company to attract AI-augmented job applicants who trust their apparently augmented skill? They'd be ground zero for it.

  • by mohsen1 on 2/3/25, 12:35 PM

    Pretty ironic that they use an automated system called CodeSignal to do the first round of interviews.
  • by rixed on 2/3/25, 9:10 AM

    I understand why it's amusing, but there is really nothing to see here. It could be rephrased as:

    « The process we use to assess candidates relies on measuring the candidate's ability to solve trivia problems that can easily be solved by AI (or internet search, or impersonation, etc.). Please refrain from using such tools until the industry comes up with a better way to assess candidates. »

    Actually, since the whole point of those many screening levels during hiring is to avoid the cost of long, in-depth discussions between many experts and each individual candidate, AI will probably be the solution that makes the selection process a bit less reliant on trivia quizzes (a solution that will, no doubt, come with its own set of new issues).

  • by matsemann on 2/3/25, 9:46 AM

    Relevant (and could probably have been a comment there): https://news.ycombinator.com/item?id=42909166 "Ask HN: What is interviewing like now with everyone using AI?"
  • by surfingdino on 2/3/25, 9:57 AM

    This insistence on using only human intelligence reminds me of the quest for low-background steel.
  • by luke-stanley on 2/3/25, 12:38 PM

    I'm sure Anthropic gets too many submitted applications that are obviously AI generated, and I'm sure that by "non-AI-assisted communication" they mean they don't want "slop" applications that sound like an LLM wrote them. They want some greater proof of human ability. I expect humans at Anthropic can tell which LLM was used to rewrite (or polish) the applications they get, but if they can't, a basic BERT classifier can (I've trained one for this task; it's not so hard).
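
    A minimal sketch of what such a classifier might look like - assuming a standard HuggingFace-style fine-tune of a small BERT-family model; the model name and the two inline training examples are illustrative placeholders, not anything the commenter or Anthropic actually uses:

      # Hypothetical sketch: fine-tune DistilBERT to label text as
      # human-written (0) or LLM-written (1). Real training data would be
      # thousands of collected samples, not this stand-in pair.
      import torch
      from torch.utils.data import Dataset
      from transformers import (AutoModelForSequenceClassification,
                                AutoTokenizer, Trainer, TrainingArguments)

      texts = ["i rly want this job, i've been poking at claude for ages",
               "I am deeply passionate about leveraging transformative AI."]
      labels = [0, 1]  # toy labels: 0 = human, 1 = LLM

      tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

      class ToyData(Dataset):
          def __init__(self, texts, labels):
              # Tokenize everything up front; padding=True pads to the
              # longest sequence so the default collator can stack tensors.
              self.enc = tok(texts, truncation=True, padding=True)
              self.labels = labels
          def __len__(self):
              return len(self.labels)
          def __getitem__(self, i):
              item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
              item["labels"] = torch.tensor(self.labels[i])
              return item

      model = AutoModelForSequenceClassification.from_pretrained(
          "distilbert-base-uncased", num_labels=2)
      Trainer(model=model,
              args=TrainingArguments(output_dir="clf", num_train_epochs=1),
              train_dataset=ToyData(texts, labels)).train()
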
  • by ReptileMan on 2/3/25, 9:23 AM

    A much better approach is to ask the candidate about the limitations of AI assistants, the rakes you can step on while walking that path, and the rakes they have already stepped on with AI.
  • by Ekaros on 2/3/25, 8:54 AM

    Why aren't they dogfooding? Surely if AIs improve output and performance they should readily accept input from them. Seems like they don't believe in their own products.
  • by hinkley on 2/3/25, 11:05 AM

    Prepping for an interview a couple weeks ago, I grabbed the latest version of IntelliJ. I wanted to set up a blank project with some tests, in case I got stuck and wanted to bail out of whatever app they hit me with and just have unit tests available.

    So, lacking any other ideas for a sample project, I just started implementing FizzBuzz. And IntelliJ started auto-suggesting the implementation. That seemed more problematic than helpful, so it was a good thing I didn't end up needing it.
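
    For reference, what the IDE was suggesting is presumably something like the canonical version (sketched here in Python for brevity; in IntelliJ it would have been Java or Kotlin):

      # Canonical FizzBuzz: multiples of 3 print "Fizz", multiples of 5
      # print "Buzz", multiples of both print "FizzBuzz", else the number.
      for n in range(1, 101):
          out = "Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0)
          print(out or n)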

  • by Joel_Mckay on 2/3/25, 10:08 AM

    Question 1:

    Write a program that describes the number of SS's in "Slow Mississippi bass". Then multiply the result by hex number A & 2.
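
    (One hedged reading, taking "SS's" as non-overlapping occurrences of "ss" and "&" as bitwise AND - part of the joke is that the question underdetermines this:)

      # "Mississippi" contains "ss" twice and "bass" once, so the count is 3.
      # 0xA & 0x2 is 0b1010 & 0b0010 == 2, and 3 * 2 = 6.
      phrase = "Slow Mississippi bass"
      count = phrase.lower().count("ss")  # non-overlapping count -> 3
      print(count * (0xA & 0x2))          # prints 6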

    Question 2:

    Do you think your peers will haze you week 1 of the evaluation period? [Yes|No]

    There are a million reasons to exclude people, and most HR people will filter anything odd or extraordinary.

    https://www.youtube.com/watch?v=TRZAJY23xio&t=1765s

    Hardly a new issue, =3

  • by OutOfHere on 2/3/25, 12:17 PM

    Whenever someone asks you not to do something that is victimless, you should always think about the power they are taking away from you, often unfairly. It is often the reason why they have power over you at all. By then doing that very thing, you regain your power, and so you absolutely should do it. I am not asking you to become a criminal, but to never be subservient to a corporation.
  • by autonomousErwin on 2/3/25, 12:12 PM

    This probably means they are completely unable to differentiate between AI and non-AI applications; otherwise they would just discard the AI piles.
  • by TypingOutBugs on 2/3/25, 9:58 AM

    How do you guys do coding assessments nowadays with AI?

    I don't mind if applicants use it in our tech round, but if they do I question them on the generated code and potential performance or design issues (if I spot any). I'm not sure it's the best approach, though (I mostly hire SDETs, so I do an 'easy' dev round with a few easy/very easy LeetCode questions that don't require prep).

  • by noncoml on 2/3/25, 9:10 AM

    If Alice does better than Bob when neither is using AI, but Bob performs better when both use AI, isn't it in the company's best interest to hire Bob, since AI is there to be used in the duties of the position?

    If graphic designer A can design on paper better than B, but B can design on the computer better than A, and the job is done on a computer, why would you hire A?

  • by trollbridge on 2/3/25, 12:45 PM

    This strikes me as similar to job applicants who apply for a position and are told it's hybrid or in-office - and then on the day of the interview, it suddenly changes from in-person to a videoconference, with the other participants in front of backdrops that look suspiciously like they're working from home.
  • by nunez on 2/3/25, 3:48 PM

    This feels very similar to ophthalmologists who make their money pushing LASIK while refusing to get it done on themselves or their relatives. "This procedure is life-changing! But..."

    Anyway, bring back in-person interviews! That's the only way to work around this Pandora's Box they themselves opened.

  • by sussmannbaka on 2/3/25, 9:02 AM

    slop for thee but not for me
  • by hhthrowaway1230 on 2/3/25, 9:39 AM

    This has a poetic tone to it.

    However, I'm not sure what to think of it. So AI should help people in their job and in their interview process, but also not? When it matters? What if you're super good at ML/AI but very bad at applications? Would you still have a chance?

    Or do you get filtered out?

  • by DrNosferatu on 2/3/25, 11:55 AM

    Aha: maybe they want to train their AI on their applicants' / job seekers' text submissions :D
  • by farceSpherule on 2/3/25, 12:43 PM

    So I guess people should not use other available tools? Spell checker? Grammar checker? The Internet? Grammarly?

    The issue is that they are receiving excellent responses from everyone and can no longer discriminate against people who are not good at writing.

  • by wosined on 2/3/25, 1:14 PM

    Only if they stop screening us with their shitty AI first. Otherwise it is slop vs slop.
  • by foul on 2/3/25, 9:07 AM

    Don't get high on your own supply, like Zuck doing the conquistador in Kaua'i.
  • by rock_artist on 2/3/25, 12:27 PM

    So suddenly we're in a state where:

    - AI companies ask candidates not to "eat their own dog food"

    - AI companies blame each other for "copying" their IP while finding it legit to use humans' IP for training.

  • by commandersaki on 2/3/25, 11:41 AM

    I'd be fine with this if they agreed not to use AI to assess you as a candidate.
  • by Shamalamading on 2/3/25, 11:15 AM

    On the face of it, it's a reasonable request, but the question itself is pointless. An applicant's outside opinion of a company is pretty irrelevant and is subject to a lot of change after starting work.
  • by thih9 on 2/3/25, 12:03 PM

    > We want to understand your personal interest in Anthropic without mediation through an AI system

    Is the application being reviewed with the help of an AI assistant though? If yes, AI mediation is still taking place.

  • by ccourtenay on 2/3/25, 4:17 PM

    You want to work at an AI company that does not allow the use of AI by its future employees.

    That is likely enough said right there. Keep looking for a company that has its head screwed on straight.

  • by dhfbshfbu4u3 on 2/3/25, 12:59 PM

    Cool. Does that mean Anthropic is not using an ATS to scan resumes?

    Of course it doesn’t…

  • by alex1138 on 2/3/25, 9:17 AM

    I generally trust Anthropic vs. others; I think Claude (beyond the obligatory censorship) ticks all the right boxes and strikes the right balance.
  • by aprilthird2021 on 2/3/25, 8:46 AM

    That's totally reasonable, imo. You also can't look up the answers using a search engine during your application to work at Google
  • by snakeyjake on 2/3/25, 1:13 PM

    At least someone realizes the soulless unimaginative mediocrity machine makes people sound soulless, unimaginative, and mediocre.
  • by ulfw on 2/3/25, 11:51 AM

    Beyond ridiculous. I'm at a loss for words at how stupid this statement is, coming from the AI company that enables all this crap.
  • by aw4y on 2/3/25, 8:49 AM

    this reminds me of an old interview, years ago, when they asked me to code something "without using Google"...
  • by layer8 on 2/3/25, 1:04 PM

    Plot twist: They are actually looking for the freethinkers who are subversive enough to still use AI assistants.
  • by ionwake on 2/3/25, 11:37 AM

    TBH it's motivated me to apply with AI, to try to somehow get away with it.

    (I need to reevaluate my workload and priorities.)

  • by DrNosferatu on 2/3/25, 11:52 AM

    Funny that this massive irony came out just now, as I don't think I'll renew my subscription with them because of R1.
  • by avereveard on 2/3/25, 8:55 AM

    Evaluators shouldn't use it either, but here we are.
  • by hcarvalhoalves on 2/3/25, 4:41 PM

    That's hilarious. A comedy script couldn't beat real life in 2025.
  • by skc on 2/3/25, 10:59 AM

    AI for thee but not for me?
  • by leCaptain on 2/3/25, 11:12 AM

    this is a reasonable request, provided there is a human on the other side who is going to read the 200-400 word response and make a judgment call.
  • by dennis_jeeves2 on 2/3/25, 10:48 AM

    >Why do you want to work at Anthropic? (We value this response highly - great answers are often 200-400 words.)

    Lowlifes

  • by chuckro84 on 2/3/25, 7:32 PM

    HR is probably using AI to waste our time with their ridiculously worded job descriptions, and now you can have a computer respond... You have simply completed the circle of stupidity. If they are upset that you have sidestepped putting yourself inside their circle, maybe there is a better place to work after all...
  • by muhehe on 2/3/25, 8:46 AM

    Seems reasonable.
  • by sd9 on 2/3/25, 8:46 AM

    Good luck with that
  • by OsrsNeedsf2P on 2/3/25, 8:49 AM

    > please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

    The exact opposite of the application process at my previous company: we said usage of ChatGPT was expected during the application and interview phases, since we heavily relied on it for work.