by brindidrip on 12/4/22, 5:07 PM with 323 comments
by brindidrip on 12/4/22, 6:26 PM
Note: This answer was generated by ChatGPT after being fed this thread.
by josephcsible on 12/4/22, 6:56 PM
by pugworthy on 12/4/22, 8:03 PM
And some queries are just not acceptable on SO, but fine for ChatGPT.
For example I might wish to ask, "Give me the framework for a basic API written in Python that uses API key authentication. Populate it with several sample methods that return data structures in json."
If I ask that on SO, I'll be downvoted and locked before I know it. I may also get some disparaging comments telling me to do my research, etc.
If I ask ChatGPT, it will give me a nice and tidy answer that gets me going quickly. It will explain things too, and allow me to ask follow up questions and take my requests for refinements. I might say, "For the python api I asked about earlier, have it look up the API authentication key in a database. If the key is in the database, it is valid." - and bam - it does it.
Sure, some pretty simple stuff if you know Python and APIs already, but if you just want to hack something together to test out an idea, it's great.
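For anyone curious, a minimal sketch of the kind of scaffold being described might look like the following: a small Flask API that checks an API key against a SQLite table before returning JSON. The route, table name, and header are illustrative choices, not anything from this thread, and it assumes Flask is installed and a keys.db file with an api_keys table already exists.

    # Illustrative sketch only: route, table name, and header are made up, not from the thread.
    import sqlite3
    from functools import wraps

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    DB_PATH = "keys.db"  # assumed local SQLite database with an api_keys(key) table

    def require_api_key(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            key = request.headers.get("X-API-Key")
            with sqlite3.connect(DB_PATH) as conn:
                row = conn.execute(
                    "SELECT 1 FROM api_keys WHERE key = ?", (key,)
                ).fetchone()
            if row is None:
                abort(401)  # key missing or not found in the database
            return view(*args, **kwargs)
        return wrapped

    @app.route("/users")
    @require_api_key
    def list_users():
        # Sample method returning a JSON data structure
        return jsonify([{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}])

    if __name__ == "__main__":
        app.run(debug=True)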
In the end, SO is a query with responses (maybe). ChatGPT is a conversation that can go beyond just the initial query.
by senko on 12/4/22, 6:00 PM
Wait a few weeks until Google is completely swamped with ChatGPT SEO pages barely distinguishable from the real thing.
If I worked at search quality at Google, I'd be very worried.
by clusterhacks on 12/4/22, 7:24 PM
I would not be surprised if the quality of human writing actually goes up. I have this weird feeling that ChatGPT and similar tools will become almost equivalent to calculators for math? My experience as a writer is that sometimes just throwing down a first draft is the hardest step - I could see these tools really assisting in the writing process. Generate a draft, do some tweaking, ask for suggestions or improvements, repeat.
I don't know how I feel about code generated by these tools. Will there be a similar benefit compared to writing? At some level, we will need some deeper mastery of writing and coding to use these things well. Is there a complexity cliff that these tools will never be able to overcome?
A total lack of trust for general internet search results. So much content is already shallow copies of other content. I don't see how general internet search survives this.
by ChrisMarshallNY on 12/4/22, 6:12 PM
That's a huge problem with "gamification." I'm not especially a fan of the concept, in a venue like SO. I think it has led to a rather nasty community, and I hardly ever go there, anymore.
I assume that we'll be seeing a lot of robotic HN content (I would not be surprised if it is already here, but has been sidelined by the mods).
by avivo on 12/4/22, 6:55 PM
- https://meta.stackoverflow.com/questions/421778/how-do-you-p...
- https://meta.stackoverflow.com/questions/412696/is-it-accept...
- https://meta.stackexchange.com/questions/384355/could-chatgp...
by pcthrowaway on 12/4/22, 8:23 PM
But it's also a serious concern from a security standpoint. If ChatGPT is providing incorrect answers, it could lead to people implementing flawed code or making poor decisions based on its advice. That could have potentially disastrous consequences.
So overall, it's a big problem that needs to be addressed. It's not just about making the site more pleasant to use, it's about ensuring the integrity and reliability of the information provided.
My prompt:
I'm writing a short story where Linus Torvalds is having a conversation with an open source contributor. In this conversation, Linus is in a bad mood.
Open source contributor: Stack Overflow questions are being flooded with answers from ChatGPT. What are the possible repercussions of this?
Linus Torvalds:
by Yuyudo_Comiketo on 12/4/22, 6:14 PM
by egypturnash on 12/4/22, 7:10 PM
Or at least that's what all of you kept telling me when I was expressing my unhappiness at the way corporate-sponsored image-generating black boxes are built atop a shaky moral foundation that sure feels like it ignores anything anyone talking about "fair use" ever dreamed of; and at the way I fear it's going to hollow out a ton of the beginner-pro jobs of my industry by making it super easy for anyone to generate stuff that is kinda fundamentally shitty in a lot of important ways, but "good enough" if you just have a space to fill with some decoration that you don't really give a crap about.
by palisade on 12/4/22, 10:27 PM
I think all ChatGPT is doing is grabbing a lot of different answers off the interwebz and squishing them together and hoping it answers your question. But, in a lot of cases it only kind of looks like what you want. If you look at images generated by AI, it is the same issue, they sort of look like what you want but there are flaws, like faces that don't look quite human, fingers that are just squishy appendages barely resembling actual fingers, etc. I mean, the tech is getting better, it's impressive, and uncanny.
But, I think we're pretty far from having these things write themselves, they need quite a lot of human intervention to be useful. Still, very impressive and something that could potentially get you closer to an answer. But, no more than spending a little time googling or learning the skill yourself. And, if you learn the skill you're better off, because then you can do it right yourself IMHO.
Also, anytime someone gets a fully working program generated out of this thing, the saying "a broken clock is right twice a day" comes to mind.
by iamflimflam1 on 12/4/22, 7:28 PM
Q: how can I tell if a stackoverflow answer is generated by ChatGPT if someone has removed the "ChatGPT" tag from the text?
A: One way to tell if a StackOverflow answer is generated by ChatGPT is to look for certain characteristics in the answer. ChatGPT answers tend to be concise and to the point, often providing code snippets or specific instructions on how to solve a problem. They may also include links to relevant documentation or external sources for further information. If the answer does not include these characteristics, it is less likely to be generated by ChatGPT.
by ubj on 12/4/22, 6:42 PM
I'm bracing myself for when this wave of AI content hits academic journals.
by imhoguy on 12/4/22, 8:27 PM
by michaelteter on 12/4/22, 6:53 PM
At this point, SO has been scraped and repackaged (poorly) dozens of times, and SEO'd to the top of search results. Even some "tutorial" sites are just repackaged SO answers.
It is only fitting that the automated SEO websites get fed automated content.
In a way, this makes the real humans, particularly the ones who know actual things, more valuable. It may create so much noise that only a skilled human could distinguish a real question and a real answer or solution from something similar but wrong.
To be fair to GPT, many human answers are sub-par and should be filtered out as well. Perhaps that's the real test: what percentage of GPT answers are decent vs human answers? Here I might bet on GPT.
by shagie on 12/5/22, 2:29 PM
> Use of ChatGPT generated text for posts on Stack Overflow is temporarily banned.
> This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.
(much more to that post and comments and answers and comments)
by duckmysick on 12/4/22, 10:16 PM
by xx__yy on 12/4/22, 9:55 PM
Inaccurate or irrelevant answers: ChatGPT is a machine learning model that uses past data to generate responses. This means that it may not always provide accurate or relevant answers to questions, leading to confusion and frustration among users.
Loss of trust: If users notice that many of the answers on the forum are coming from ChatGPT, they may lose trust in the forum and stop using it. This could lead to a decline in user engagement and overall traffic.
Competition with human contributors: ChatGPT's answers may compete with those provided by human contributors, leading to a decrease in the quality and value of the content on the forum. This could make the forum less useful and engaging for users.
Increased moderation: The influx of answers from ChatGPT may require more moderation to ensure that the answers are accurate and relevant. This could require additional resources and time for moderators, leading to increased costs and workload.
by brindidrip on 12/4/22, 6:28 PM
To detect a response generated by ChatGPT, we could first analyze the content of the response to see if it contains any unnatural or repetitive language. We could also check the formatting of the response to see if it follows the typical conventions used by human responders on the platform. Additionally, we could check for any unusual patterns in the timestamps of the response, as AI-generated responses may be posted more quickly or regularly than responses written by humans. Finally, we could also use machine learning algorithms to train a model to identify responses generated by ChatGPT based on these and other characteristics.
Quick, someone ask ChatGPT to generate the stubs.
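Taking that half-seriously, stubs for the ideas above might look something like this: crude heuristics for repetitive language, formatting tells, and posting cadence, combined into a single check. This is only a sketch of the approach described in the comment, not a tested detector; the thresholds and feature choices are invented for illustration.

    # Stubs only: thresholds and features are invented for illustration.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Answer:
        body: str
        posted_at: datetime

    def repetitiveness_score(text: str) -> float:
        # Crude proxy for "repetitive language": share of repeated words.
        words = text.lower().split()
        return 1.0 - len(set(words)) / max(len(words), 1)

    def looks_machine_formatted(text: str) -> bool:
        # Placeholder formatting check: stock openings often seen in generated text.
        return text.strip().lower().startswith(("sure,", "certainly", "as an ai"))

    def posting_cadence_seconds(history: list[Answer]) -> float:
        # Average gap between a user's consecutive posts; tiny gaps are suspicious.
        times = sorted(a.posted_at for a in history)
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        return sum(gaps) / len(gaps) if gaps else float("inf")

    def probably_generated(answer: Answer, history: list[Answer]) -> bool:
        # Combine the heuristics; a real system would train a model on labeled data.
        return (repetitiveness_score(answer.body) > 0.6
                or looks_machine_formatted(answer.body)
                or posting_cadence_seconds(history) < 60)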
by hysan on 12/4/22, 7:03 PM
by akrymski on 12/4/22, 9:32 PM
by charles_f on 12/4/22, 9:24 PM
by anigbrowl on 12/4/22, 9:45 PM
I have an OpenAI account and like their product; I'm certainly impressed by this latest version, though I have had little time to play with it. But the combination of quality AI with social reputation scoring is absolutely toxic, and the wider impact of SEO (a less curated version of the same thing) is a disaster. I was already sick of all the tutorial sites like geeks4geeks, w3schools, etc. and their numerous imitators just content farming whatever is turning up in searches. Marketing and self promotion is cancer, and the people who try to game their way to success in this manner are awful. Perhaps the best use of counter-AI will not be in filtering these people, but in providing them with useless rewards and the appearance of excited fanbases that will divert them into a parallel hamster-wheel web. Nothing would please me more than for the top 5000 influencers of this sort to be granted exclusive access to a luxury cruise that leaves port once a year for a tour of the Bermuda Triangle.
I think the best use of ChatGPT would be in an IDE plugin, so you could point at function trees or code blocks and ask it to explain things, have it take care of basic refactoring tasks, help porting between languages or libraries and so on. I can definitely see a future where you throw together a working prototype of something, answer a few questions about type hinting and edge cases, and AI does the legwork of converting the prototype into a strongly typed final product.
by KomoD on 12/4/22, 9:15 PM
[1]: https://stackoverflow.com/users/19192614/boatti?tab=topactiv...
[2]: https://stackoverflow.com/users/20684429/a-s?tab=topactivity
by cma on 12/4/22, 7:15 PM
It will start feeding back into the training set, corrupting things. OpenAI will have an advantage at first, as they can trivially filter everything they have generated out of future training corpora, since you can only run it through their servers. If they or someone else makes breakaway progress, such that almost all generated content comes from their own servers because their results are so much better, they could form a strong self-reinforcing moat against competitors, who would be forced to train on semi-spam that the leader can trivially filter out.
It's also possible we'll see something like the existing big-tech patent cross-licensing agreements, where they all agree to share their generated outputs to filter from training, making it very hard for new entrants.
Other companies will begin having advantages as well, depending on how well they can get less tainted user data. Think of Discord, for example, where users may use AI but are less likely to gamify it like Stack Overflow and flood it for points, and will instead be correcting its output in programming discussions, etc.
As things become more accepted, Microsoft will probably eventually sell access to private GitHub for training, with some stronger measures around avoiding rote memorization.
by karmasimida on 12/4/22, 7:15 PM
I think ChatGPT is actually sometimes a lot better than SO answers
by ggerganov on 12/4/22, 6:20 PM
As a human, I cannot give an accurate estimate. /joke
by brindidrip on 12/4/22, 5:13 PM
by johndough on 12/4/22, 6:16 PM
by fhsjaifbfb on 12/7/22, 9:00 AM
by lajosbacs on 12/4/22, 9:02 PM
So, a double whammy for SO, which makes me feel really sad.
by lr1970 on 12/4/22, 10:47 PM
by seydor on 12/4/22, 6:02 PM
by yhusain on 12/5/22, 6:33 AM
by yhusain on 12/5/22, 6:36 AM
by Ancalagon on 12/4/22, 7:46 PM
The only thing we can be sure of is that whatever we can imagine is already behind what the AI will become.
by softwaredoug on 12/4/22, 6:01 PM
by solardev on 12/4/22, 5:38 PM
by l0b0 on 12/4/22, 10:55 PM
¹ Old sites are probably going to slowly degrade permanently, since they can't easily migrate to a new paradigm.
by deafpolygon on 12/4/22, 8:41 PM
by nyokodo on 12/4/22, 8:35 PM
by Yorch on 12/4/22, 7:44 PM
by Oxidation on 12/4/22, 10:43 PM
2023: hyperinflation of internet points.
by hxugufjfjf on 12/4/22, 5:10 PM
by passion__desire on 12/4/22, 7:16 PM
by phenkdo on 12/4/22, 8:00 PM
by ineedausername on 12/5/22, 11:02 AM
by zasdffaa on 12/4/22, 8:42 PM
by SergeAx on 12/4/22, 10:13 PM
by roland35 on 12/5/22, 2:38 AM
by adverbly on 12/4/22, 8:09 PM
by shinycode on 12/4/22, 7:20 PM
by khiqxj on 12/9/22, 4:25 PM
by gysfjiutedgj on 12/6/22, 2:53 AM
by funshed on 12/5/22, 3:24 PM
by Ancalagon on 12/4/22, 7:44 PM
by Phenomenit on 12/4/22, 8:17 PM
by fuzzfactor on 12/5/22, 7:14 AM
Could make those known to be human more acceptable as such.
by Gupie on 12/4/22, 10:13 PM
by daxfohl on 12/4/22, 6:51 PM
by notaspecialist on 12/5/22, 8:25 AM
by ricardobayes on 12/4/22, 7:05 PM
by daemon_9009 on 12/11/22, 5:32 AM
by hdufort74 on 12/4/22, 7:24 PM
I have a collection of about 25 prompts such as these, in my benchmark.
I have run these examples through different applications such as AI Dungeon, OpenAI Playground, NovelAI, etc. Results vary a lot. In some cases, the results look good, but upon closer inspection you realize that the AI keeps providing the same exact answer. That is the case for the ice cream prompt: pickle, fried chicken, and curry keep showing up. I guess the model contains a few specific examples of original ice cream recipes and just picks them.
For the Pokemon and "new word" prompt, models failed to come up with anything original. Until I tried OpenAI Playground this week and finally got some really creative answers, with variety.
AI Dungeon (2 years ago) was already good at faking tech support steps. OpenAI is amazingly good, although in most cases it provides solutions that only make sense superficially. It's the ultimate bullshit engine.
Another word of caution: while OpenAI can now guesstimate what a code snippet does, and can generate some pretty good code in many languages (I've tried 6809 assembler and the results surprised me), it is very unreliable.
More alarming is the fact that it's a text engine, not a math formula interpreter. It gets confused at simple equations and cannot interpret anything that's not already ordered (it cannot apply operator priority or respect parentheses).
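A quick illustration of the kind of expression being described, where operator priority and parentheses change the result and a text engine has to pattern-match rather than actually evaluate (the expressions here are my own examples, not from the comment):

    # Hypothetical examples: precedence and parentheses give different results,
    # which a plain text predictor has no evaluator to fall back on.
    print(2 + 3 * 4)    # 14: multiplication binds tighter than addition
    print((2 + 3) * 4)  # 20: parentheses override the default precedence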
I think it will become increasingly difficult to identify content coming from ChatGPT and other chatbots or story generators. An arms race might be futile. We should apply stricter rules to identify problematic answers: answers that are too generic or vague and can't be used to directly solve a practical problem, and answers that contain incorrect or misleading information. Identifying vague or non-practical questions might also help in avoiding a deluge of chatbot answers. Some users will ask very general questions, and then it becomes difficult to evaluate the answers. Or users will ask questions that were already answered in the past. The proper way to handle those is to point them to the prior discussion and avoid duplicating it. The wrong way is a chatbot or a human seizing the opportunity to copy-paste existing content for a quick win.
In a way, chatbots and humans can both provide useful insights, as well as useless or incorrect answers. But so far, only a human can provide a proper answer to a moderately complex technical question if no prior answer exists.
by datalopers on 12/4/22, 7:34 PM
by laerus on 12/4/22, 8:44 PM