from Hacker News

Ask HN: What are your thoughts on ChatGPT as a professional?

by demyinn on 2/1/24, 6:51 AM with 63 comments

I’m a new graduate starting an internship. It’s imperative that I do well to receive a return offer. In my job search, I’ve built projects with ChatGPT’s help. I didn’t tell GPT to just build me a project so I could copy and paste it; I genuinely use it as a “Jarvis”, like Tony Stark. That said, is there any stigma with using ChatGPT in the workplace? I don’t like to use GPT to code for me. I like to use GPT as an imaginary person who collaborates on projects. For example, I ask questions like, “I want to implement feature x. It should be used like this… my plan to implement it is to do a, b, and c. Is my structure a good and professional solution to this implementation?”

Usually, I get a really helpful and insightful response that optimizes my design. Sometimes I come up with a solution that might work, but GPT helps me come up with a better one. This ends up teaching me so much because I actually absorb the information. Moreover, it helps me debug too; I see it as a shortcut to debugging. I can spend 2-3 hours googling stuff, or I can spend about 45 minutes talking to GPT to pinpoint the bug and come up with a viable solution. All that said, what are your thoughts? Am I too reliant on it? Is this a “healthy” relationship with AI? I want to be a strong engineer. I don’t want to pick up sloppy habits and become a poor engineer. My perception of GPT is that it’s a new resource to take advantage of. I need some input. What are your thoughts? Thanks!

  • by DrAwdeOccarim on 2/1/24, 10:55 AM

    I work in biotech, and at my company our CEO specifically has, for over a decade, constantly asked all of us, "How are you leveraging AI in your job?". So much so that we used to joke about it. Well, when ChatGPT arrived, something happened. Everyone at the company (now about 5,000 people) was already primed, looking for how they could finally answer "Yes" to our obsessive CEO. We have a deep roster of professional software engineers, and one dev in particular hacked out an internal site that leveraged the OpenAI API and deployed it company-wide. At last count, a majority of the company uses it daily.

    What my team specifically has found is that building custom GPTs and working out detailed, two-way prompts has allowed us (and specifically my part-time employee) to deliver detailed technical reports commensurate with an FTE's output. The key is to force the output to "show its work" and to expect to review it much as you would the work of a first-year graduate student. If you do the math, this puts the company above its FTE cost in terms of per-employee output. That's the real ticket right now. It's not inventing the next drug, but it sure as hell is making me more productive without causing burnout. It also works really well for meeting minutes, a huge source of time waste for project managers who should be doing high-thought work instead of menial tasks.

    So to answer your question: become reliant on it. This is the future of white-collar work. Local instances are catching up to the big ones. It's only going to get better!
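
    To make the API part concrete, here is a minimal sketch in Python of the kind of call such an internal tool might make, with a prompt that forces the model to show its work; the model name, system prompt, and report task are illustrative assumptions, not the commenter's actual setup.

        # Minimal sketch of an internal "report drafting" call via the OpenAI API.
        # Requires `pip install openai` and OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        # A "two-way" prompt in the spirit of the comment: force the model to show
        # its work so a reviewer can check it like a first-year grad student's draft.
        system_prompt = (
            "You draft internal technical reports. For every claim, show your "
            "reasoning and name the input section it came from. Flag anything "
            "you are unsure about instead of guessing."
        )

        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; any chat-capable model works
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Draft a report from these meeting notes: ..."},
            ],
        )
        print(response.choices[0].message.content)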

  • by NatSydenham on 2/1/24, 10:20 AM

    I find it's great for very generic work, like boilerplate or where you have forgotten how to implement a single, well-defined algorithm. However, relying on its output is very risky if you aren't verifying everything it puts out. A good test to show this is to ask it to write a short essay on something you know very well, then behold all the incorrect information it enthusiastically tries to feed you.

    Also:

    * It has a horrible habit of inventing properties on objects or methods in libraries.

    * It will very happily straight-up lie to you about things it does.

    * Often when you ask it to make a specific change, it will give you back exactly the same thing as last time.

    * For the love of God, don't put any company-owned code into it.

    * Maybe I'm just bad at prompting...

  • by ljm on 2/1/24, 10:30 AM

    I find it useful to handle things like creating diagrams and describing architecture, since I have some trouble converting what’s in my head to diagrams on paper. I can verify if the output matches my description of the requirements and make any surgical changes to the UML by hand.

    I don’t use it for programming though, and I do worry about otherwise talented engineers using it as a crutch. There has been more than one occasion in code review where an answer to one of my comments was “well, this is what copilot said…,” which I take a pretty dim view of. In that sense it’s like the early 2000s when people started thoughtlessly deferring their intelligence to sat nav.

  • by nicbou on 2/1/24, 9:11 AM

    It’s a tool for personal productivity, just like spreadsheets and python scripts. I use it as a super dictionary, to write text and code or just satisfy my curiosity. It’s useful when I am knowledgeable enough to verify the output. It’s a technology that augments my abilities, which is what technology should do.

    But I never let it speak for me, and I find it rude to make someone take 5 minutes to read something I “wrote” in 5 seconds. Technology should make us better humans. It should amplify our best traits, and our ability for bullshitting isn’t one of them.

  • by GianFabien on 2/1/24, 7:34 AM

    As a recent graduate (in SE/CS, I presume) you should have better-quality knowledge than is exhibited by the volumes of tutorial material that ChatGPT was trained on.

    I have decades of experience as an SE and have taught postgrad SE courses at uni. I have experimented with ChatGPT and Bing. With well-thought-out questions, LLMs return better answers than Google and other searches. But not always. Distrust and Verify!

    Somebody described LLMs as an "Endlessly Enthusiastic Savant Intern". Just don't try to get it to do high-level systems design or to understand business domains, requirements, etc. If you treat ChatGPT as a gofer to research information, then you can certainly save time. But you do need to apply your judgement and knowledge to ensure that what you produce is appropriate and functions correctly.

    In due course, you will gain experience and expand your knowledge horizons through your professional work. By all means, continue to read widely and in depth. Never forget that ChatGPT and the like are just one of many tools that you use.

    In my father's day, proficient use of the slide rule was mandatory. For me, electronic calculators and computers were the tools of the trade. For the emerging generation of engineers, AI is the contemporary tool. We have yet to glimpse what will come next. However, foundational knowledge in your discipline, combined with ongoing learning, remains the essential superpower.

  • by wslh on 2/1/24, 10:38 AM

    I use ChatGPT every day and I am sure it is a great, novel, and revolutionary tool. I say I am sure because I can measure how ChatGPT improves my daily work. I don't use it for software development because I am in a management role now.

    The first insight is that an article I need to publish can be proofread immediately, whereas before I used an incredible professional content editor who could take hours to review and improve the article. The editor continues to work for us though ;-). In general, ChatGPT's answers should not be copied and pasted but merged with your own content.

    Another insight is that ChatGPT is gradually replacing, or at least complementing, Google Search. Google Search is terrible nowadays, and simple questions to ChatGPT are a good way to start a search in a domain you are not an expert in. It needs a dialectic approach, but it works.

    I would say garbage in, garbage out: if you ask original questions, the tool can be more insightful, but always be critical of the output.

  • by gmerc on 2/1/24, 9:46 AM

    It’s a tech demo, data-acquisition, and research/prototyping operation for OpenAI that’s heavily subsidized and aimed at building a quality dataset for GPT-5 and beyond - effectively turning their initial models, trained in an unsustainable fashion and offering only temporary moats, into an investor narrative and a combination of new data and users, both of which are potential new moats and close the gap to Google.

    It’s not a serious product.

  • by bhaney on 2/1/24, 7:29 AM

    > is there any stigma with using ChatGPT in the workplace?

    Any? Sure. At the organization level, there are workplaces that completely ban ChatGPT and will fire you for using it, and there are workplaces that buy all their engineers a subscription to it and actively encourage using it. At the individual level, there are people who make heavy use of it, people who can't stand it, and (most commonly, imo) people who have nothing against it but don't personally find it very useful.

    Don't use it if your company bans it, but otherwise do whatever you want and maybe check with your coworkers about how they feel towards it before revealing to them that you're using it, just to be safe.

    > Am I too reliant on it? Is this a “healthy” relationship with AI?

    Based on what you've said, your relationship with it seems perfectly fine. The concern with coding AIs is that novice programmers will encounter a problem they can't solve on their own, ask an AI to solve it for them, and copy and paste the output without understanding the problem or the solution. In the best case, the code can work but have subtle flaws that won't show up until later. In the worst case, the code doesn't even make sense and you annoy your peers by making them review zero-effort garbage.

    But if you're just using it to further explore the solution-space of a problem that you already know how to solve in at least one sensible way, there's no danger there and you're just learning and improving. Just be sure to fully understand any AI solution you encounter before internalizing it as a lesson.

    If we froze all coding AIs right now and prevented them from continuing to improve, I'd think you'd find them becoming less and less useful to you as you gain experience. Obviously, they do much better solving common problems with lots and lots of public solutions already available, and more senior engineers don't tend to need help with those or have much to learn about them. That being said, I know plenty of very capable senior engineers who use AIs to generate repetitive boilerplate (often tests) in verbose/inexpressive languages like Go. Anyway, depending on how quickly these AIs continue to improve, you may never get the chance to "grow out of them" and they might be able to keep teaching you increasingly complex things until they finally replace all of us. Who knows!
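
    As a concrete illustration of the kind of repetitive test boilerplate meant here, a minimal table-driven sketch follows; it is in Python rather than Go, and the function under test and its cases are made up for the example.

        import pytest


        def apply_discount(price: float, percent: float) -> float:
            """Toy function standing in for real application code."""
            return price * (1 - percent / 100)


        # The repetitive part an LLM can churn out quickly; a human still picks
        # the cases and checks the expected values.
        @pytest.mark.parametrize(
            "price, percent, expected",
            [
                (100.0, 0, 100.0),   # no discount
                (100.0, 25, 75.0),   # simple percentage
                (0.0, 50, 0.0),      # zero price stays zero
            ],
        )
        def test_apply_discount(price, percent, expected):
            assert apply_discount(price, percent) == pytest.approx(expected)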

  • by gherkinnn on 2/1/24, 11:07 AM

    No stigma. If it helps, use it. Don't discard a useful tool but know its limits. You are not paid to carry around trivia to be used in a vacuum. You are paid to solve problems that sometimes involve programming.

    It serves me well for micro-programming - problems between a single line of code and a full function. Here I can control for all its weirdness and hallucinations. As soon as I need to compose these elements, I find it somewhere between neutral and a nuisance. High-level changes across a domain or beyond are impossible.

    Understanding problems inside-out or from first principles has been vastly more helpful than blindly copying around libraries/SO answers/Copilot ideas.

    That said, I have been in this industry for over a decade, am jaded and bitter and up to my armpits in Stockholm Syndrome, and have learned how to keep computers happy. Starting out I would have loved Copilot and it would have absolutely made a huge difference.

  • by sirwhinesalot on 2/1/24, 10:23 AM

    No stigma; just don't get too reliant on it. Some of the stuff it spits out is terrible, or entirely made-up gibberish that doesn't even work.

    Use it to save time. Stuff like writing boilerplate, documentation drafts, summarizing things, looking up stuff you'd normally use stack overflow for, things like that.

  • by satisfice on 2/1/24, 12:57 PM

    Yes, there is a stigma.

    No one who has pride in his work relies on LLMs to do any lifting.

    Perhaps someday LLMs will not be so unreliable. But at the moment they cannot be trusted to do anything important. Or let me put it this way: if you trust them, that trust cannot possibly be justified by actual careful testing, because you don’t have the time for that testing, you don’t know how to do that testing, whatever testing you did do cannot be generalized to your next project, whatever testing you did do is already obsolete, and finally, because if you did the testing you’d see that ChatGPT is unreliable. My opinion comes from fairly extensive, unpaid testing of ChatGPT (plus some dabbling with its competitors).

    Treat an LLM as if it were a child, not like it’s a co-worker.

  • by bigpeopleareold on 2/1/24, 10:37 AM

    I see plenty of people at work use it. I don't, because I feel like understanding a topic takes time. However, this applies to the work I do. Having an instant boilerplate generator is nice, I guess.

    I think, if anything, you'll be spending a lot of time code-reviewing AI output. Reading code and thinking about its failure cases is a good thing, and having to do this constantly could make you sharper at it. (EDIT:) To clarify, you are still the one who has to see the forest for the trees in the problems you are working on, so "how things fit together" is still a skill to develop.

  • by ActionHank on 2/1/24, 10:45 AM

    I jump around languages and solutions a fair bit so I use it in place of Google + API docs. "I need to do X with this language, in this stack, what APIs are available?". I find I get better results than Google and some of the more wordy, dispersed docs (.net, anything Apple).

    I have tried Copilot in my IDE, but I just don't see the value. About 50% of the time the suggestions are not useful or correct; the other 50% of the time they are exactly what I was already typing, which is what plain code completion does.

  • by jmfldn on 2/1/24, 10:50 AM

    Worse than useless for non-trivial tasks. Great for simple code / scripting questions where I would have used Google in the past. It's basically my Bash script assistant now.

  • by Mvandenbergh on 2/1/24, 10:57 AM

    We (an engineering consultancy, so not software but physical infrastructure) have an internal ChatGPT so that people can use it for work.

    I find that it is quite good at answering textbook type questions and giving background on things. Basically a kind of supercharged search engine.

    So for example, if I knew nothing about water treatment technology, I could ask it "what are the typical stages of water treatment for groundwater from a borehole?" and it will give me a good answer. Sometimes when you ask more specific questions, it will give some weird answers. It was convinced that desalination was already the main source of drinking water in a particular country, probably because there have been a lot of new desal schemes proposed and so the input corpus has a lot of associations between that country and desalination.

    I just asked it a question on whether you could use nitrogen to cool a nuclear reactor (correct answer is yes, if it's enriched heavy nitrogen) and it gave an ok answer but didn't mention that N14 absorbs neutrons. This is pretty obvious but there is very little written about this idea in the likely input corpus so ChatGPT doesn't know this.

    It cannot (and in fact, standard LLMs cannot, because of how they are architected) answer questions that require constructing counterfactuals or hypotheticals outside their input base. To note, from the point of view of an LLM, something which is covered in its input corpus is neither a counterfactual nor a hypothetical, even if it is one relative to the real world. So if nobody has ever written anything on the use of a particular technology for a particular purpose, no pure LLM will be able to answer questions relating to it. Other emerging AI technologies look like they will be able to do that, though.

  • by dns_snek on 2/2/24, 2:42 AM

    > I see it as a shortcut to debugging. I can spend 2-3 hours googling stuff, or I can spend about 45 minutes talking to GPT to pinpoint the bug and come up with a viable solution.

    This raises some questions for me. When I say I'm "debugging", I'm usually stepping through the execution of my program and constantly analyzing how the results my program is producing differ from my expectations.

    I don't know enough about your bugs to make any general claims, but are you sure that the 2-3 hours of "googling stuff" that were reduced to 45 minutes of "talking to GPT" couldn't be reduced to 5-30 minutes of using a real debugger to find the error?

    It takes a lot of practice to develop debugging skills that allow you to efficiently and methodically track down most common bugs. Delegating the thinking part of that process to an LLM sounds like a crutch to me.
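
    For readers who have not used one, "a real debugger" can be as simple as Python's built-in pdb; the buggy function below is a made-up example to show the workflow.

        # Pause execution and inspect state with pdb: p <expr> prints a value,
        # n runs the next line, s steps into a call, c continues.
        def average(values):
            total = 0
            for v in values:
                total += v
            return total / len(values)  # fails when values is empty

        if __name__ == "__main__":
            data = []
            breakpoint()          # drops into pdb just before the failing call
            print(average(data))  # step in with "s" and watch len(values) == 0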

  • by dusted on 2/1/24, 10:51 AM

    I think it's relatively useful when you don't take the approach of blindly trusting it or expecting it to be able to do stuff it can't do.

    I get the most out of it when I'm doing stuff so new to me that I don't even have the right words to search on Google. There it's very useful to ask it questions the same way I'd ask a (non-asshole) human, and rather than intentionally misunderstanding me by taking every word literally and assuming I know what it means, it will try to explain what it understood from what I said.

    A non-technical example where I found it easier to just ask ChatGPT: "What's that spice that some people really hate and say tastes like soap, and others don't really mind?"

    "Exactly, what's it called in Danish?"

    A very natural exchange, and it gave the right answer.

  • by davedx on 2/1/24, 10:53 AM

    I think it depends where you end up, but where I currently work we almost mandated that devs at least get familiar with how to use it, if not necessarily use it in everyday work. It's just too useful and valuable to omit.

    What I've also found is that it's incredibly useful for domain knowledge too. I've been working on a project around electrical grids, and when I had to go deep into AC grid electrical engineering, it was very, very helpful. It's useful for business-related stuff too.

    At the end of the day, I think it's a powerful tool that shouldn't become a crutch, and any company saying otherwise is sabotaging themselves. But that's not to say they wouldn't be a good place to work for, companies sabotage themselves in all sorts of ways :)

  • by nunez on 2/1/24, 2:57 PM

    i think using assistive tools like chatgpt is helpful _after_ you put in the work to learn the fundamentals. tools like this can be a crutch that inhibits your ability to think bigger and expand your horizons. i would also recommend learning how to use search engines more effectively to avoid being supplied only one "side" of an answer. stackoverflow is a great example of this; i've found that the second or third answer is (somehow) usually the best one.

    that said, i don't use gpt or llms, so take my opinion for the two milli-cents that it's worth!

  • by 2d8a875f-39a2-4 on 2/1/24, 10:57 AM

    I guess you mean code-generating LLMs in general, not specifically ChatGPT.

    For programmers I think they're valuable tools and productivity enhancers. Knowing how to use one effectively to boost your productivity is important.

    The "[thing] will kill software engineering" narrative is one we hear every time something new boosts productivity by hiding abstractions - high level programming languages, IDEs, garbage collection, etc. The end result is never less programmers, it's more code. More stuff gets made that otherwise would not be. Same will be true for LLMs.

  • by nerdawson on 2/1/24, 10:51 AM

    I use ChatGPT all the time at work.

    I find it particularly useful when I'm responding to a client with something technical. ChatGPT can sharpen up the message, clarify points in a client-friendly way, and hit the right tone.

    Great for throwing around ideas and getting some feedback before diving into the code too.

    I find it's easy to go wrong if you're too trusting of the output though. For important questions, I'll get some initial guidance from ChatGPT to refine my research rather than outright accept any answer it gives.

  • by firtoz on 2/1/24, 10:31 AM

    I find it excellent to talk to it to learn the basics about something I don't have much experience in. Its broad knowledge helps a lot with that. When it comes to finding super accurate, or updated, or state of the art things, it's not great, but it definitely can lead you in the right direction. And this removes so much friction from trying new things.

    Note: I'm a technical freelancer, mostly doing code but also some 3D modeling, game development, etc.

  • by jaredcwhite on 2/1/24, 6:00 PM

    > My perception of GPT is that it’s a new resource to take advantage of.

    And yet other perceptions (my own, for example) are that this technology is fundamentally unethical and should be used with extreme caution, if at all (I don't, as a matter of principle).

    Yes, there will be a stigma around using this in some circles. You should look into the policies of whatever orgs you're involved with. Some won't have an issue with it, but others will.

  • by starbugs on 2/1/24, 10:55 AM

    It appears to be good enough to make you believe its answers are trustworthy. It's bad enough that you should not trust anything it says and should double-check everything it produces.

    It's great for getting started quickly, but don't fall into the trap of believing it could be a viable replacement for a human expert or a reviewed reference.

    To be honest, I now tend to believe we're a long way off from an AI capable of that (again).

  • by nathell on 2/1/24, 11:14 AM

    It’s a very fancy autocomplete, and I treat it as such. It can be a very useful tool, but I tend to take a conservative stance and stay clear of the hype.

  • by yodsanklai on 2/1/24, 10:12 AM

    > is there any stigma with using ChatGPT in the workplace?

    I work in a big tech company. ChatGPT (and equivalent systems) is becoming pervasive in developer tooling. There's no stigma; its use is actually encouraged.

    I don't think it's a life changer, or that it'll spare engineers the time it takes to learn their craft.

  • by michaelsalim on 2/1/24, 10:42 AM

    It's really still too early to say. Everyone has widely different opinions about it. You'll find plenty of people advocating for it and plenty against it. I'd say that if you're able to explain it to people the way you've explained it here, most will be understanding.

  • by esperent on 2/1/24, 9:57 AM

    LLMs are too new, and too much in flux, to answer this question well. I can give an answer now, and in two months it might be obsolete. So all I can do is say something generic like:

    It's a productivity tool that will find its place alongside similar tools like intellisense and linting. Just like those tools, it will change the way people code - they will need to hold less info in their minds, and that can potentially allow them to reach greater heights, or learn faster. Or it could potentiate laziness instead. That's down to the individual (and applies to more than just coding; LLMs are equally used in fields like art, translation, marketing, copywriting, etc.).

    As for my personal usage. Here's a couple of thoughts that might be obsolete in two months:

    * It's awesome for looking up basic algorithms like sorts

    * It's pretty great for doing simple things with well established code bases (e.g. React)

    * It's even better for doing those things if you are already an expert so that you know how to guide it

    * It's pretty terrible for doing anything with more obscure/recently released code bases, whether you are an expert or not

    * I have heard it suggested this will make people want to avoid working with less well known libraries. This remains to be seen I guess

    Thoughts on what this means in a "professional" career:

    1. If your professional career brings you down well trodden paths (e.g. writing React apps) it will probably be a big part of your work

    2. If your professional career leads you to working on obscure systems, or even writing those systems, it won't be as useful for you except as a reference for algorithms, or for writing comments (this is where I have been recently).

    However, if someone can figure out how to retrain the models constantly on new info, so that even the newest releases become part of the training data as fast as they would show up in a Google search, that will change things a lot. Likewise, if it can be trained on your own personal obscure codebase (I think this might already be possible?), that would be a big deal.

    Finally, as for stigma: I don't think so. There are privacy issues, but these can be worked around by running a local LLM, and local models are getting better and better. If you have a graphics card with 16 GB of VRAM, I think there are already models you can run locally that are close to GPT-3.5 performance.
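
    As a sketch of what running a local model can look like, here is a minimal example using the Hugging Face transformers library; the model name is just one example of a small instruction-tuned model available at the time, and memory requirements vary.

        # Minimal local-LLM sketch. Assumes `pip install transformers torch`
        # and enough GPU (or CPU) memory for the chosen model.
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="mistralai/Mistral-7B-Instruct-v0.2",  # example model, not a recommendation
            device_map="auto",  # use the GPU if one is available, else the CPU
        )

        prompt = "[INST] Write a Python function that reverses a linked list. [/INST]"
        result = generator(prompt, max_new_tokens=256, do_sample=False)
        print(result[0]["generated_text"])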

  • by danielbarla on 2/1/24, 10:32 AM

    I find it excels as a "rubber ducking" partner, as a replacement for "tldr" on esoteric utilities whose documentation is either lacking or overly verbose, and most especially as a non-technical tool for generating concepts you may not have thought of (e.g. "what aspects should I be considering when designing X").

    Most of these use cases work around the "problem" of hallucination (except perhaps the 2nd one); it's ideating for you and you judge what's useful or not. As such, it's one more productivity tool I feel people should learn to use, with the relevant understanding and care.

  • by j7ake on 2/1/24, 11:06 AM

    I found three things very useful about ChatGPT:

    (1) A Google Search replacement.

    (2) An excellent text summarizer.

    (3) A quick prototyper (code and math).

  • by apapapa on 2/1/24, 6:59 AM

    ChatGPT will become irrelevant somewhat soon (at this rate). Warning: not a professional opinion.

  • by joshxyz on 2/1/24, 10:29 AM

    like all good things, just use it in moderation.

    you don't use a hammer for all carpentry work.

  • by richardw on 2/1/24, 10:50 AM

    Use it. It's going to sometimes accelerate your understanding hugely because you can interrogate it about a new area. It's also going to lie to you, quite often, and give you rubbish code / suggestions. Learn to figure out where it's good and where it's rubbish.

    Ask it to explain every line of something, after the fact. Ask it for better ways of doing things. Ask it how you can make code more robust, and what and how you can test. Be better than your peers at using it. The world will only increase its usage of this new thing, so why not just be really good at it?

    In the grand scheme of things it's about the value that you add, and some of that will be getting into a new problem fast and sometimes it'll be your understanding of fundamentals. Do both, use whatever tools you need. Buy books, read open source code, read blogs, use AI.

  • by coolThingsFirst on 2/1/24, 10:41 AM

    It makes mistakes all the time, even for simple tasks.