by swman on 2/21/24, 10:56 PM with 135 comments
I think for complete beginners or casual programmers, GPT might be mind-blowing and cool because it can create a for loop or recommend some solution to a common problem.
However, for most of my tasks, it has usually ended up being a total waste of time and a source of frustration. Don't get me wrong, it is useful for those basic tasks where in the past I'd have gone the Google -> Stack Overflow route. However, for anything more complex, it falls flat.
Just a recent example from last week - I was working on some dynamic SQL generation. To be fair, it was a really complex task and it was 5pm, so I didn't feel like whiteboarding (when in doubt, always whiteboard and skip GPT lol). I thought I'd turn to GPT and ended up wasting 30 minutes while it kept hallucinating and giving me code that wasn't even valid. It skipped some of the requirements and missed things like a GROUP BY, which meant the generated query didn't even work. When I told it that it missed this, it regenerated some totally different code that had other issues... and I stopped.
When ChatGPT first came out I was using it all the time. Within a couple of weeks, though, it became obvious that it is really limited.
I thought I'd wait a few months and maybe it would get better, but it hasn't. Who exactly are copilots for, if not beginners? I really don't find them that useful for programming, because 80% of the time the solutions are a miss or, worse, don't even compile or work.
I enjoy using it to write sci-fi stories or learn more about some history stuff, where it just repeats something it parsed off Wikipedia or whatever. For anything serious, I find I don't get that much use out of it. I'm considering canceling my subscription because I think I'll be okay using 3.5 or whatever basic model I'd get.
Am I alone here? Sorry for the ramble. I just feel like I had to put it out there.
by whalesalad on 2/21/24, 11:18 PM
On the GPT-4 side I've had great luck dealing with complex SQL/BigQuery queries. I will explain a problem, offer my schema or a psql trigger along with my goals for how to augment it, and it's basically spot-on every time. It helps me when I know what I want to do but don't know precisely how to achieve it.
by lolinder on 2/21/24, 11:26 PM
As a concrete example: GitHub Copilot has been absolutely life-changing for working on hobby programming language projects. Building a parser by hand consists of writing many small, repetitive functions that use a tiny library of helper functions to recursively process tokens. A lot of people end up leaning on parser generators, but I've never found one that isn't both bloated and bad at error handling.
This is where GitHub Copilot comes in—I write the grammar out in a markdown file that I keep open, build the AST data structure, then write the first few rules to give Copilot a bit of context on how I want to use the helper functions I built. From there I can just name functions and run Copilot and it fills in the rest of the parser.
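To make that concrete, here is a made-up sketch (not lolinder's actual code) of the hand-rolled style being described: a couple of tiny helpers plus one small function per grammar rule, which is exactly the kind of repetitive shape Copilot tends to continue well once it has seen a few examples. All names here are hypothetical.

    # Hypothetical sketch of a hand-rolled parser: small helper library + one
    # repetitive function per grammar rule.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        # --- tiny helper library ---
        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def expect(self, token):
            if self.peek() != token:
                raise SyntaxError(f"expected {token!r}, got {self.peek()!r}")
            self.pos += 1

        # --- grammar rules ---
        def parse_expression(self):
            # expression := term (("+" | "-") term)*
            node = self.parse_term()
            while self.peek() in ("+", "-"):
                op = self.peek()
                self.expect(op)
                node = BinOp(op, node, self.parse_term())
            return node

        def parse_term(self):
            # term := NUMBER
            token = self.peek()
            self.expect(token)
            return Num(int(token))

    print(Parser(["1", "+", "2", "-", "3"]).parse_expression())

Once a few rules like these exist in the file, naming the next rule is often enough for the completion to fill in its body.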
This is just one example of the kind of task that I find GPTs to be very good at—tasks that necessarily have a lot of repetition but don't have a lot of opportunities for abstraction. Another one that is perhaps more common is unit testing—after giving Copilot one example to go off of, it can generate subsequent unit tests just from the name of the test.
Is it essential? No. But it sure saves a lot of typing, and is actually less likely than I am to make a silly mistake in these repetitive cases.
by kromem on 2/22/24, 11:07 AM
I really have zero desire to ever be programming without Copilot again, and have been writing software for over a decade.
It just saves so much time on all the boring stuff that used to make me tear my hair out and wonder why I even bother doing this line of work at all.
Yeah, you're right, it's not as good as I am at the complex, more abstract things I actually enjoy solving.
So it does the grunt work like writing the I/O calls, logging statements, and the plethora of "nearly copy/paste but just different enough you need to write things out" parts of the code. And then I review what it wrote, and I write the key parts that I'm not going to entrust to a confabulation engine.
My favorite use is writing unit tests, where if my code is in another tab (and ideally a reference test file from the same package) it gets about 50% of the way to a full unit test, and suddenly my work is no longer writing the boilerplate but just narrowing the scaffolding down to what I actually want it to test.
It's not there to do your job, it's there to make it easier in very specific ways. Asking your screwdriver to be the entire toolbox is always going to be a bad time.
by _fs on 2/21/24, 11:24 PM
I prefer to use it as more of an autocomplete on a line per line basis when writing new code.
Typically, I use it for small and concise chunks of code that I already fully understand, but save me time. Things like "Here's 30 lines of text, give me a regex that will match them all" or "Unroll/rewrite this loop utilizing bit shifting".
I also use Copilot as a teacher - for quickly grokking assembly code or code in languages that I don't use every day, or for having a back-and-forth conversation with Copilot Chat about a specific technology I want to use and don't fully understand. Copilot Chat makes an excellent rubber duck when working through issues.
by daymanstep on 2/21/24, 11:13 PM
I asked ChatGPT to find the bug and it didn't find it. I also asked GPT4-Turbo to find the bug and it also couldn't find it. In the end I found the bug manually using tracing prints.
After I found the bug, I wondered if GPT4 could have found it so I gave the buggy code to GPT4 and it found the line with the bug instantly.
To me this shows that GPT4 is much better than GPT4-Turbo and GPT-3.5.
by allears on 2/21/24, 11:14 PM
by xpl on 2/21/24, 11:23 PM
The thing is: in software engineering, you're very often "a beginner" when using new technology or operating outside your familiar domain. In fact, you need to learn constantly just to stay in the business.
by diegoop on 2/22/24, 1:43 PM
- For basic autocompletion it's OK; on lazy days I even find myself thinking "why isn't the AI proposing a solution to this stupid method yet?".
- For complicated coding stuff it's worthless; I've lost a lot of time trying to fix AI-generated code only to end up rewriting it myself, so I rely on Google/Stack Overflow for that kind of research.
- For architectural decisions, or research like looking for the right tool for a job, I've found it quite useful, as it often presents options I hadn't considered or didn't know about in the first place, so I can take them into consideration too.
by thot_experiment on 2/21/24, 11:29 PM
It's also so helpful to be able to just ask questions of the documentation on popular projects, whether it be some nuance of the node APIs or a C websockets library, it saves me countless hours of searching and reading through documentation. Just being able to describe what I want and have it suggest some functions to paste into the actual documentation search bar is invaluable.
Similarly, I find it's really helpful when trying to prototype things. The other day I needed to drop an image into a canvas. I didn't remember off the top of my head exactly how to get a blob out of an .ondrop (or whatever the actual handler is), and I could have found it with a couple minutes of Google and MDN/SO, but if I ask ChatGPT "write me a minimal example for loading a dropped image into a canvas" I get the exact thing I want in 10 seconds, and I can just copy-paste the relevant stuff into MDN if I need to understand how the actual API works.
I think you're just using it wrong, and moreover I think it's MUCH MUCH more useful as an experienced engineer than as a beginner. I think I get way more mileage out of it than some of my more junior friends/colleagues because I have a better grasp of what questions to ask, and I can spot it being incorrect more readily. It feels BAD, to be honest, like it's further stratifying the space by giving me a tool that puts a huge multiplier on my experience, allowing me to work much faster than before and leaving those who are less experienced even further behind. I fear that those entering the space now, working with ChatGPT, will learn less of the fundamentals that allow me to leverage it so effectively, and their growth will be slowed.
That's not to say it can't be an incredibly powerful learning tool for someone dedicated to that goal, but I have some fear that it will result in less learning "through osmosis" because junior devs won't be forced into as much of the same problem solving I had to do to be good enough, and perhaps this will allow them to coast longer in mediocrity?
by ado__dev on 2/21/24, 11:19 PM
I've been writing code for close to 20 years now across the full stack, I have written a lot of bad code in my life, I have seen frameworks come and go, so spotting bad code or spotting bad practices is almost second nature to me. With that said, using Cody, I'm able to ship much faster. It will sometimes return bad answers, I may need to tweak my question, and sometimes it just doesn't capture the right context for what I'm trying to do, but overall it's been a great help and I'd say it has made me 35-40% more efficient.
(Disclaimer: I work for Sourcegraph)
by MattGaiser on 2/21/24, 11:16 PM
I don’t find them that great at large scale programming and they couldn’t do the hard parts of my work, but a lot of what I do doesn’t need to be “great.”
There’s the core system design and delivering of features. That it struggles with. Anything large seems to be a struggle.
But generating SQL for a report I do sporadically on demand from another team?
Telling me what to debug to get Docker working (which I am rarely doing as a dev)? Anything Shell or Nginx related (again, infrequent, so I am a beginner in those areas)
Generating infrequently run but tedious formatting helper functions?
Generating tests?
Basically, what would you give a dev with a year of experience? I would take ChatGPT/Copilot over me with 1 year of experience.
The biggest benefit to me is all the offloaded non-core work. My job at least involves a lot more than writing big features (maybe yours does not).
by CPLX on 2/21/24, 11:27 PM
I have been involved in software and implementing technical things since the late 90s, and from time to time have been pretty good at a few things here and there, but I am profoundly rusty in all the languages I sort of know and useless in the ones I don't.
But I'm technical. I understand, at sort of a core level, how things work: jargon, the key elements of data structures, object-oriented code, an MVC model, and whatever else. Like I've read the right books.
Without ChatGPT I am close to useless. I’m better off writing a user story and hiring someone, anyone. Yes I can code in rails and know SQL and am actually pretty handy on the command line but like it would take me an entire day and tons of googling to get basic things working.
Then they launched GPT and I can now launch useful working projects that solve business problems quickly. I can patch together an API integration on a Sunday afternoon to populate a table I already have in a few minutes. I can take a website I’m overseeing and add a quick feature.
It’s literally life changing. I already have all the business logic in my head, and I know enough to see what GPT is spitting out and if it’s wrong and know how to ask the right questions.
Unlike the OP I have no plans to do anything complex. But for my use cases it’s turned me from a project manager into a quick and competent developer and that’s literally miraculous from where I’m standing.
by ryzvonusef on 2/22/24, 2:48 PM
I'm not a programmer, I'm a student in acc/fin, to use a weird analogy, if you are a chef, I'm a stereotypical housewife, and we think differently about knives (or GPTs).
I differentiate between tuples, lists and dictionaries not by the definition, but by the type of brackets they use in Python. I use Python because it's the easiest and most popular tool, and I use Phind and other GPT tools because programming is just a magic spell for me to get to what I want, and the less effort I have to spend the better.
But it doesn't mean that GPTs don't bring their own headaches too. As I get more proficient, I now realise that GPTs are now giving me bad or inefficient advice.
I can ask a database-related question and then realise, hang on, despite me specifying this is for Google BigQuery, it's giving me an answer that involves some function I know is not available on it. Or I read the code it recommends for pandas and realise, hang on, I could combine these two lines into one.
I still use GPT heavily because I don't have time to think about code structure, I just need the magic words to put into the Jupyter cell, so I can get on with my day.
But you don't, and you actually think about these things, and you are realising the gaping flaws in the knife's structure. That's life. You have a skill, and it comes with pros and cons.
Like a movie reviewer who can no longer just go to the cinema and enjoy something for the sake of it... you also can't just accept some code from a GPT and just use it, you can't help not analyse it.
by vault on 2/21/24, 11:26 PM
by electric_mayhem on 2/21/24, 11:17 PM
I find that kind of heartening, honestly.
But it’s by no means a death sentence for AI. Plenty of dimensions for massive improvement.
by calgoo on 2/21/24, 11:23 PM
by schmookeeg on 2/22/24, 12:36 AM
I code all over the stack, usually some bizarre mix of python, pyspark, SQL, and typescript.
TS support seems pretty nice, and it can optimize and suggest pretty advanced things accurately.
Py was hopeless a few months ago, but my last few attempts have been decent. I've been sent down some rabbit holes though, and been burned -- usually from my not paying attention and being a lazy coder.
PySpark is just the basics, which is fine if I am distracted and just want to do some basic EMR work. More likely, though, I'll rummage through my own code snippets instead.
The speed of improvement has been impressive. I'm getting enthused about this stuff more and more. :)
Plus, who doesn't enjoy making random goofy stuff in Dall-E while waiting for some progressbar to advance? That alone is worth the time investment for me.
by nicklecompte on 2/22/24, 3:51 AM
I was testing ChatGPT-3.5 with F# in 2023 and saw some really strange errors. Turns out it was shamelessly copying from GitHub repos that had vaguely related code to what I was asking - this was easy to discover because there's not much F# out there. In fact the relative sparsity of F# is precisely why GPT-3.5 had to plagiarize! It did not take long to find a prompt that spat out ~300 lines verbatim from my own F# numerics library. (I believe this problem is even worse for C numeric programmers, whose code and expertise is much more valuable than anything in .NET.) OpenAI's products are simply unethical, and I am tired of this motivated reasoning which pretends automated plagiarism is a-okay as long as you personally find it convenient.
But even outside of plagiarism I am really nervous about the future of software development with LLMs. So often I see people throwing around stats like "we saw a 10% increase in productivity" without even mentioning code quality. There are some early indications that productivity gains in LLM code assistance are paid for by more bugs and security holes - nothing that seems catastrophic, but hardly worth dismissing entirely. What is frustrating is that this was easily predictable, yet GitHub/OpenAI rushed to market with a code generation product whose reliability (and legality) remains completely unresolved.
The ultimate issue is not about AI or programming so much as software-as-industrial-product. You can quickly estimate increases in productivity over the course of a sprint or two: it's easy to count features cleared and LoC written. But if there are dumb GPT "brain fart" errors in that boilerplate and the boilerplate isn't adequately reviewed by humans, then you might not have particularly good visibility into the consequences until a few months pass and there seem to be 5-10% more bug reports than usual. Again, I don't think the use of Copilot is actually a terrible security disaster. But it's clearly a risk. It's a risk that needs to be addressed BEFORE the tool becomes a de facto standard.
I certainly get that there's a lot of truly tedious boilerplate in most enterprise codebases - even so I suspect a lot of that is better done with a fairly simple deterministic script versus Copilot. In fact my third biggest irritation with this stuff is that deterministic code generation tools have gotten really good at producing verifiably correct code, even if the interface doesn't involve literally talking to a computer.
by timrobinson333 on 2/22/24, 10:54 PM
I find I spend most of my time thinking about the problem domain and how to model it in logic, and very little time just banging out boilerplate code. When I want to do the kind of task a lot of people will ask gpt for, I find it's often built into the language or available as an existing library - with experience you realise that the problem you're trying to solve is an instance of a general problem that has already been solved.
by tdudhhu on 2/21/24, 11:25 PM
At the core, AI/ML gives you answers that have a high probability of being good answers. But in the end this probability is based on averages. And the moment you are coding stuff that is not average, AI does not work anymore, because it cannot reason about the question and 'answer'.
You can also see this in AI-generated images. They look great, but the averaging makes them all look the same and kind of blurry.
For me the biggest danger of AI is that people put too much trust in it.
It can be a great tool, but you should not trust it to be the truth.
by swasheck on 2/21/24, 11:20 PM
by fzzzy on 2/22/24, 1:38 PM
I have had some experiences where I did not know the language or library that I wanted to work with, and the results were disastrous. It hallucinates exactly the api that I want, but unfortunately it doesn't exist. As an aside -- that api SHOULD exist. The api design is very obviously missing this extremely important feature. I think llms may help people to figure out what kind of apis and features should be intuitively expected and increase software quality. But it still wasted a ton of my time.
My conclusion is actually that llm copilots will be bad for beginners because they don't know enough to specify the request in enough detail, and they don't know enough to see any errors and be able to understand why they were generated.
by anon291 on 2/21/24, 11:35 PM
On the other hand? If I want to add payments to my web app (not a web dev, but I mess around with side projects for fun), and I don't want to read the Stripe library docs, for example, it's pretty good at getting me started installing the right library and generating the boilerplate. Same with various boilerplate for code generation.
On the other hand, ChatGPT (which has been clearly trained on stuff I've read) has an uncanny ability to describe my esoteric side projects much better than I can, so there is that. I have some DSLs written in Haskell and it's actually really good at providing answers with it, and explaining things succinctly.
by throwAway139Z on 2/22/24, 2:46 PM
With that being said, IMO you can optimize your dev work to get the most use out of GPT-4 by providing better context, examples, etc. These don't really fit into the Copilot workflow, but they do fit into instruction-tuned model workflows (GPT-4 / Mixtral / Mistral / CodeLlama, etc.).
E.g. use the first 50k tokens of the context window to pre-prompt with relevant context from your code repo / SDK documentation / examples, whatever, prior to actually prompting with your task. And demonstrate a test in your prompt that shows what you want. Taking a "few-shot" approach with specified validation (in your test) results in considerable improvements in accuracy for programming (and other) tasks.
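A rough sketch of that prompt layout (the file paths, example test, and task are all hypothetical, and the actual model call is left out since it depends on which API you use):

    # Hypothetical sketch: pack repo/SDK context plus an example test into the
    # prompt before the actual task ("few-shot" with a validation target).
    context_files = ["docs/sdk_overview.md", "src/client.py"]  # hypothetical paths

    def build_prompt(task: str) -> str:
        parts = ["You are helping with the following codebase.\n"]
        for path in context_files:
            with open(path) as f:
                parts.append(f"### {path}\n{f.read()}\n")
        parts.append(
            "### A test the solution must pass\n"
            "def test_fetch_user():\n"
            "    assert fetch_user(42).name == 'Ada'\n"
        )
        parts.append(f"### Task\n{task}\n")
        return "\n".join(parts)

    prompt = build_prompt("Add retry logic to fetch_user with exponential backoff.")
    # send `prompt` to GPT-4 / Mixtral / CodeLlama etc. via whatever client you use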
(YMMV)
by usgroup on 2/23/24, 10:51 AM
I'd also ask it to re-write existing code "more canonically" or "more succinctly". Again, it fails to do this almost every time, but often it uses something that I didn't know existed whilst doing it, which I found to be valuable.
by tourmalinetaco on 2/22/24, 3:38 AM
I've found it's best at completing repetitive but simple tasks, such as organizing data or making simple scripts. Anything more complex, or anything that cannot be easily explained in a way it "understands", will run into problems. For instance, telling it to program a custom variant of Dijkstra's Algorithm for my specific input data? No go, it doesn't work. However, a simple script to replace any text from column 1 in a CSV with its neighbor text in column 2? Works first try.
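For what it's worth, here is one way such a script might look, assuming the replacements live in a two-column CSV and get applied to a separate text file (the commenter's actual script may have worked differently; file names are made up):

    import csv
    import sys

    # Hypothetical sketch: replace every occurrence of a column-1 value with
    # its column-2 neighbour. Usage: python replace.py mapping.csv input.txt
    mapping_path, text_path = sys.argv[1], sys.argv[2]

    with open(mapping_path, newline="") as f:
        replacements = {row[0]: row[1] for row in csv.reader(f)}

    with open(text_path) as f:
        text = f.read()

    for old, new in replacements.items():
        text = text.replace(old, new)

    print(text)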
by jarjoura on 2/21/24, 11:26 PM
All I use it for is to avoid repetitive stuff as it's exceptionally good at guessing my next step.
The autocomplete bits feel wrong most of the time, and as fast as API updates happen it's mostly a wash in terms of productivity.
by Garlef on 2/22/24, 2:26 PM
1) Areas where it was very helpful:
* For my current gig, I recently had to play the best computer game of all time: Linux. ChatGPT was invaluable here. "How do I figure out where the network connections are managed on this device?"
* Also ChatGPT managed to assist me in creating a task breakdown for a prototype that I was building
2) Areas where it was of some help. I think using traditional methods would have been just as good in these cases:
* Sample code for the usage of TS clients for MS Graph and notion
* generating an almost working OPC UA dummy server in TS using "node-opcua"
3) Areas where it was of no help and mostly a one way box where I could enter my thoughts. Not sure if using ChatGPT here was a win or a loss, though - you could still think of the process as some form of notetaking with feedback:
* Working on some complex tree recursion things
* trying to come up with the interface for an infrastructure-as-code framework
by perrygeo on 2/22/24, 4:18 AM
by joegibbs on 2/21/24, 11:33 PM
You're right though that it isn't very useful for complex problems. Complex SQL queries in particular are hopeless. But I probably only spend 10-20% of my time on stuff where I really have to think about what I'm doing anyway.
The autocompletion is the best part for me. When I start writing a function called `mapUsersToIds` or whatever and it autocompletes the 3 or 4 lines of code that I now don't need to write, that saves me a ton of time - I'd say I'm 30% more productive now that I don't need to spend time typing, even if the autocompleted code is exactly what I was going to write in the first place.
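For illustration, the kind of tiny function being described might look something like this (a hypothetical shape, written here in Python rather than whatever the commenter actually uses):

    # Hypothetical example of a 3-4 line function an autocomplete handles reliably.
    def map_users_to_ids(users):
        """Return the ids of the given user dicts."""
        return [user["id"] for user in users]

    print(map_users_to_ids([{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]))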
by delduca on 2/22/24, 10:34 AM
Both tools work if you don't care about the quality of the code and are working with boring tech, otherwise, it's a total disaster.
by Cloudef on 2/22/24, 3:32 AM
by skp1995 on 2/21/24, 11:37 PM
When it comes to making a more complicated change, things are never easy because of the limited context window and the general lack of reasoning and inherent knowledge of the codebase I am working with.
Having said this, GPT4 has been really good for the one off questions I have about either syntax or if I forget how to do "the thing I know is possible I am just not sure" or the mundane things like docker commands or some other commands which I need help with.
But... if you guys have seen Gemini 1.5 Pro - I was seriously mind-blown, and I think it's the first time I felt an LLM was better than me, and that has to do with code search. I have had my fair share of navigating large codebases and spending time understanding implementations (clicking go-to-reference, go-to-definition) and keeping a mental model... the fact that this LLM can take a minute to understand and answer questions about a codebase does feel like a game changer.
I think the right way to think about AI tooling for programming is not to ask it to go and build this insane new feature which will bring in lots of money for you, but how it can help you get that edge in your daily workflows (small quality-of-life changes which compound over time, just like how LSP is taken for granted in the editor nowadays).
Another point to mention here, which I believe is a major miss, is that these copilots write code without paying attention to the various tools we as humans would use when writing code (LSP, linters, compilers, etc.). They are legit writing code like they would in a simple notepad, and that is another reason why the quality is often pretty bad (but Copilot has proved that with a faster feedback loop and the right UX it's not too big a hassle).
We are still very early in this game and with many people building in this space and these models improving over time I do think we will look back and laugh how things were done pre-AI vs post-smart-AI models.
by sim7c00 on 2/22/24, 2:14 PM
me: Can you tell me how to implement XYZ without using the standard library?
gpt: use function y from the standard library.
throws computer out of window
it can't even properly understand basic language. it remembers occurrences and does statistical analysis... it just pretends it understands, in all cases, even if it's accidentally right. a multi-billion dollar project to get answers from a toddler blurting out random words it's heard around it...
by urbandw311er on 2/21/24, 11:30 PM
by mcint on 2/21/24, 11:31 PM
With too much assumed context, it only does a good job of spitting out the answer to a common problem, or implementing a mostly correct version of the commonly written task similar to the one requested.
When you use copilot, are you shaping your use to its workflows? Adding preceding comments to describe the high-level goal, the high-level approach, and other considerations for how the code soon to follow interacts with the rest of the codebase?
by ncallaway on 2/21/24, 11:34 PM
by krukah on 2/21/24, 11:23 PM
- generate short blocks of low-entropy code (save some keystrokes)
- get me off the ground when using a new library (save some time combing through documentation)
by dkersten on 2/22/24, 3:58 PM
It is NOT good at generating complex functions or code, only simple cases, and I do not use Copilot for "comment driven development", as my experience with that wasn't good.
I use it as a fancy autocomplete, I use it to explain code, I use it for refactoring code (e.g. it's good at tasks like "change all variable names to snake_case"), I use it to provide API usage examples for functions, I use it to generate basic docstrings (which I then hand-edit to add the detail I need), and I use it to point out flaws in code (it's bad at telling me if code is good, but it's pretty good at spotting mistakes if the code is bad).
I’ve also used it to generate functions or algorithms that I wouldn’t have been able to do by myself and that includes complex SQL (TimescaleDB windowed queries), in an iterative process. I find the best results are when you generate short snippets at a time with as few requirements as possible but with as much detail on what you’re asking for as possible, and then it’s an iterative process. You also need to guide it, it works if you already know how you want to approach it and just want the AI to figure out the nitty gritty specific details. I’ve used this to generate a concurrent bitset-based memory allocator. I don’t think I could have done it by myself.
I’ve also had success generating little Python scripts in their entirety, although to get the exact behaviour I wanted was a process of repeatedly pointing out problems and eventually noticing it was not where the LLM was editing. So it’s important you understand every bit of code that it generates and don’t accept code until you’re certain you understand it.
As an example of a recent script it generated for me, it reads a TOML file containing a table, it looks for a key that you specify as a command line arg. It then generates a CSV for all keys it finds. Eg it turns this:
[test.a]
x = 1
y = 2
[test.b]
x = 3
z = 4
Into this:
x,y,z
1,2,
3,,4
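A minimal sketch of what such a generated script might look like (assuming Python 3.11+'s tomllib, the table name passed as a command-line argument, and output to stdout - the original script may well differ):

    import csv
    import sys
    import tomllib

    # Hypothetical sketch: read a TOML file, collect the sub-tables under the
    # key given on the command line, and emit a CSV whose columns are the
    # union of all keys seen. Usage: python toml_to_csv.py data.toml test
    toml_path, table_name = sys.argv[1], sys.argv[2]

    with open(toml_path, "rb") as f:
        data = tomllib.load(f)

    rows = list(data[table_name].values())   # e.g. the dicts under [test.a], [test.b]
    columns = sorted({key for row in rows for key in row})

    writer = csv.DictWriter(sys.stdout, fieldnames=columns)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)                 # missing keys become empty cells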
It did this a lot faster than it would have taken me to do it myself. I've also had good experience using ChatGPT to do pros-and-cons lists of approaches, or to do some basic research, e.g. into new algorithms (now that it can search the web).
by rogierhofboer on 2/21/24, 11:33 PM
I already gave up on anything complex, but it also fails at relative simple things.
It goes like this: The first answer does something useful, but is not the full solution or contains a bug.
When told, it apologizes and gives code that does not even compile. Then, when trying to steer it in the right direction, it gets worse and worse.
Then it hallucinates a non-existent library that should supposedly solve the problem.
And in the end I end up writing the code myself...
by Szpadel on 2/22/24, 7:20 PM
So letting it write some boilerplate (Copilot) or solve a simple problem that I don't want to waste my time on provides enough value to be worth spending money on.
I also find it very useful for filling out tests, where it mostly correctly fills in the other test cases given a single example - again, not hard, but time consuming.
For harder problems I often find it useful when I'm debugging something that I'm not familiar with: it usually gives half very generic advice, but often suggests some valid starting points for the investigation (e.g. recently I had to debug an OpenVPN server, which I hadn't touched in years; the issue manifested as an error about truncated packets, and GPT correctly suggested looking at TLS negotiation).
Sometimes I also use it to rewrite/refactor code to see if maybe I would like a different style/approach - this is rather rare; at the beginning I found it useful, but now I think GPT has started to often corrupt code or replace sections with a comment, which makes it much less useful.
by srameshc on 2/21/24, 11:36 PM
by mat41 on 2/21/24, 11:21 PM
by robertheadley on 2/23/24, 5:32 PM
I can't code, but I understand coding concepts like variables, Booleans, etc.
by littlestymaar on 2/21/24, 11:24 PM
Be careful with that too: it will also spit out whatever urban legend it read on a subject, without distinguishing between the facts it got from Wikipedia and the bullshit it read elsewhere.
Keep in mind, they are language models, not knowledge models.
by icar on 2/22/24, 7:16 PM
by b20000 on 2/22/24, 4:19 AM
The issue is that it will take a long time before they wake up and scramble to re-hire the people they laid off. In the meantime, I predict a lot of developers will have pivoted to other careers, retired, become AI devs, etc.
by JohnFen on 2/22/24, 3:23 PM
by reustle on 2/22/24, 2:40 AM
by chenpeleg on 2/24/24, 11:57 AM
by Zelphyr on 2/22/24, 4:54 PM
by aChattuio on 2/22/24, 2:03 PM
It helps tremendously.
Also, when you look at what Google is doing internally: GPT might not be perfect right now, but I would try to keep up with testing/using it regularly enough to not lose sight of it.
by d--b on 2/22/24, 9:46 AM
1. Complete tedious / repetitive code
2. Help with new/rarely used things. For instance, I'd ask it: write a function that fetches the price of the Apple stock from Bloomberg for the year 2023. And then I can use that and tweak it as I want.
by reactordev on 2/21/24, 11:35 PM
by bugglebeetle on 2/21/24, 11:18 PM
by yonatan8070 on 2/22/24, 6:59 PM
by afpx on 2/21/24, 11:20 PM
I save well over an hour a month with it, so it's worth it.
by mulle_nat on 2/21/24, 11:25 PM
by vendiddy on 2/22/24, 1:24 PM
I used to get very helpful responses but recently find myself correcting it a lot.
by karmasimida on 2/21/24, 11:47 PM
by sam0x17 on 2/22/24, 1:31 PM
by mmusc on 2/22/24, 6:26 AM
Other than that don't trust it.
by ochronus on 2/22/24, 3:02 PM
by slotrans on 2/22/24, 5:55 AM
by tomfunk on 2/21/24, 11:14 PM
by SergeAx on 2/24/24, 6:56 PM
by pwb25 on 2/22/24, 10:36 AM
Some days I write 2 lines of code, or just try to find stuff. All this copilot talk feels like it's never about real software development.
by lulznews on 2/22/24, 1:40 AM
by aristofun on 2/22/24, 2:17 PM