by meinersbur on 5/12/25, 12:01 PM with 41 comments
by TheDong on 5/12/25, 1:46 PM
I'm on the side of the CoC committee who told the author they engaged without enough consideration or kindness.
Reporting bugs is nice. It's less nice if, when a maintainer asks for a clearer reproduction, you respond with "I already gave you a reproduction, even if you have to edit it a little. I'm not a programmer, all I can give you is some AI spam. I'll leave it up to you to do your jobs" (edited only lightly from what the author really wrote).
by kordlessagain on 5/12/25, 1:46 PM
by hyperhello on 5/12/25, 1:07 PM
I don't really have a dog in the race, but I think people should react this way to AI communication. They should be shunned and informed in no uncertain terms that they are not welcome to communicate any more.
by tomovo on 5/12/25, 1:19 PM
> it further demonstrated my good intentions
> "you are arguing with a law professional"
> "AI summary," ... shows the effort I am willing to invest ...
Wow.
by QuadmasterXLII on 5/12/25, 1:32 PM
by yeputons on 5/12/25, 1:17 PM
Open source thrived back in the early 2000s too, although I don't remember anything even remotely resembling a Code of Conduct back then; then again, I wasn't paying attention. Was it a thing?
I found that Drupal adopted a CoC in 2010, and Ubuntu already had one no later than 2005 (the "Ubuntu Management Philosophy" book from 2005 mentions it).
by malcolmgreaves on 5/12/25, 1:25 PM
EDIT -
> Once again these two Gentoo developers showed a lack of good manners.
…
> hold a personal grudge against me
Yes, indeed this non-technical person seems to have found that while they don't have a mind sharp enough for software, nor the respect and understanding to realize they can't talk to people the same way they do as a lawyer, they're well on their way into the subculture of posting their emotional rants onto the internet. (haha!)
by jcranmer on 5/12/25, 1:32 PM
LLVM has already found that AI summaries tend to provide negative utility when it comes to bug reports, and it has a policy against using them. The moment you admit "an AI told me that...", you've told every developer in the room that you don't know what you're doing, and very likely, trying to get any useful information out of you to resolve the bug report is going to be painful at best. (cf. https://discourse.llvm.org/t/rfc-define-policy-on-ai-tool-us...)
Looking over the bug report in question... I disagree with the author here. The original bug report amounts to "hi, you have lots of misnamed compiler option warnings when I build it with my toolchain", which is not a very actionable bug report. The scripts provided don't offer much insight into what the problem might be, and having loads and loads of options in a configure command increases the probability that it breaks for no good reason. Also, for good measure, the link provided points to the latest version of a script, which means it can change and no longer reproduce the issue in question.
Quite frankly, the LLVM developer basically responded with "hey, can you provide better, simpler steps to reproduce?", to which the author responded [1] "no, you should be able to figure it out from what I've given already." Which, if I were in the developer's shoes, would cause me to silently peace out at that moment.
At the end of the day, what seems to me to have happened is that the author didn't provide sufficient detail in their initial bug report and bristled quite thoroughly at being asked to provide more. Eli Schwartz might have crossed the line in response, but the author here was (to me) quite clearly the first to have thoroughly crossed it.
[1] Direct link, so you can judge for yourself if my interpretation is correct: https://github.com/llvm/llvm-project/issues/72413#issuecomme...
by watusername on 5/12/25, 1:51 PM
There's no denying that AI is helpful, _when_ the human has some baseline knowledge to check the output and steer the model in the correct direction. In this case, it's just wasting the maintainers' time.
I've seen many instances of this happening in the support channels for Nix/NixOS, which has been around long enough for the models to give plausible responses, yet too niche for them to emit usable output without sufficient prompting:
"No, this won't work because [...]" "What about [another AI response that is incorrect]?" (multiple back-and-forths, everyone is tired)
by GrantMoyer on 5/12/25, 1:42 PM
by KingOfCoders on 5/12/25, 1:56 PM
"But I am not blaming you for not having a degree in software engineering."
But then
"And you admitted that the large scripts in question contain hardcoded information such as your personal computer's login username, clearly those scripts won't work out of the box on someone else's machine".
while the other committer ends with
"You're free to propose a patch yourself instead. "
So the committer acknowledges that the user is no software developer, but then the two of them demand that the user do things they might not be able to do.
That's not going to work.
by KingOfCoders on 5/12/25, 1:22 PM
by high_na_euv on 5/12/25, 1:13 PM
by Zufield on 5/15/25, 12:58 PM
by itsanaccount on 5/12/25, 1:24 PM
What an amazingly effective phrase to get open source developers to do what you want. /s
by rho4 on 5/12/25, 1:16 PM