from Hacker News

“They introduce kernel bugs on purpose”

by kdbg on 4/21/21, 10:32 AM with 1912 comments

  • by dang on 4/21/21, 7:00 PM

    This thread is paginated, so to see the rest of the comments you need to click More at the bottom of the page, or like this:

    https://news.ycombinator.com/item?id=26887670&p=2

    https://news.ycombinator.com/item?id=26887670&p=3

    https://news.ycombinator.com/item?id=26887670&p=4

    https://news.ycombinator.com/item?id=26887670&p=5

    https://news.ycombinator.com/item?id=26887670&p=6

    https://news.ycombinator.com/item?id=26887670&p=7

    (Posts like this will go away once we turn off pagination. It's a workaround for performance, which we're working on fixing.)

    Also, https://www.neowin.net/news/linux-bans-university-of-minneso... gives a bit of an overview. (It was posted at https://news.ycombinator.com/item?id=26889677, but we've merged that thread hither.)

    Edit: related ongoing thread: UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (205 comments and counting)

  • by rzwitserloot on 4/21/21, 12:28 PM

    The professor gets exactly what they want here, no?

    "We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".

    That'll be a fun paper to write, no doubt.

    Additional context:

    * One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].

    Greg KH then immediately calls bullshit on this, and then proceeds to ban the entire university from making commits [2].

    The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.

    As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie is simply a continuation of their experiment: 'what would someone intentionally trying to add malicious code to the kernel do?').

    * Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]

    [1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2

    [2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

    [3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...

  • by ENOTTY on 4/21/21, 10:55 AM

    Later down thread from Greg K-H:

    > Because of this, I will now have to ban all future contributions from your University.

    Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.

    EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.

    In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Lu[3], or from Lu himself, dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Lu[4].

    Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Lu in any significant way.

    [1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22

    [2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...

    [3]: https://www-users.cs.umn.edu/~kjlu/

    [4]: http://cobweb.cs.uga.edu/~wenwen/

  • by ltfish on 4/21/21, 3:03 PM

    Some clarifications, since these points are unclear in the original report.

    - Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.

    - According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into the Linux kernel. The three attempts did not even become Git commits.

    Greg has every reason to be unhappy, since they were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *is very likely* to have nothing to do with the three intentionally incorrect patch attempts that led to the paper. Many people on HN do not seem to know this.

    [1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

  • by motohagiography on 4/21/21, 2:33 PM

    This isn't friendly pen-testing in a community; this is an attack on critical infrastructure using a university as cover. The foundation should sue the responsible profs personally and seek criminal prosecution. I remember a bunch of U.S. contractors saying they did the same thing to one of the OpenBSD VPN library projects about 15 years ago as well.

    What this professor is proving out is that open source and (likely) other high-trust networks cannot survive truly mendacious participants; but, perhaps by mistake, he's also showing how important it is to make very harsh and public examples of such actors and their mendacity.

    I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.

  • by karsinkk on 4/21/21, 3:35 PM

    Here's a clarification from the Researchers over at UMN[1].

    They claim that none of the bogus patches were merged into the stable code line:

    > Once any maintainer of the community responds to the email, indicating "looks good", we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

    I haven't been able to find out which three patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.

    [1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    [2] : https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

  • by karlding on 4/21/21, 8:29 PM

    The University of Minnesota's Department of Computer Science and Engineering released a statement [0] and "suspended this line of research".

    [0] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...

  • by Dumbdo on 4/21/21, 3:00 PM

    In the follow up chain it was stated that some of their patches made it to stable: https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...

    Can someone who's more invested in kernel devel find them and analyze their impact? That sounds pretty interesting to me.

    Edit: This is the patch reverting all commits from that mail domain: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

    Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine, though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...

  • by whack on 4/21/21, 7:45 PM

    Let me play devil's advocate here. Such pen-testing is absolutely essential to the safety of our tech ecosystem. Countries like Russia, China, and the USA are, without a doubt, doing exactly the same thing that this UMN professor is doing. Except that instead of writing a paper about it, they are going to abuse the vulnerabilities for their own nefarious purposes.

    Conducting such pen-tests, and then publishing the results openly, helps raise awareness about the need to assume-bad-faith in all OSS contributions. If some random grad student was able to successfully inject 4 vulnerabilities before finally getting caught, I shudder to think how many vulnerabilities were successfully injected, and hidden, by various nation-states. In order to better protect ourselves from cyberwarfare, we need to be far more vigilant in maintaining OSS.

    Ideally, such research projects should gain prior approval from the project maintainers. But even though they didn't, this paper is still a net-positive contribution to society, by highlighting the need to take security more seriously when accepting OSS patches.

  • by gnfargbl on 4/21/21, 10:54 AM

    From https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...,

    > A lot of these have already reached the stable trees.

    If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.

  • by Aissen on 4/21/21, 1:41 PM

    Greg does not joke around: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

        [PATCH 000/190] Revertion of all of the umn.edu commits
  • by random5634 on 4/21/21, 2:14 PM

    How does something like this get through an IRB? I always felt IRBs were over the top, and then they approve something like this?

    UMN looks pretty shoddy - the response from the researcher saying these were automated by a tool looks like a potential lie.

  • by kwdc on 4/21/21, 4:41 PM

    It would be fascinating to see the ethics committee exemption. I sense there was none.

    Or is this kind of experiment deemed fair game? Red vs blue team kind of thing? Penetration testing.

    But if it were me in this situation, I'd ban them for the ethics violation as well. Acting like an evildoer means you might get caught... and punished. I found the email about cease and desist particularly bad behavior. If that student was lying, then the university will have to take real action. Reputation damage and all that. Surely an academic reprimand.

    I'm sure there's plenty of drama and context we don't know about.

  • by kdbg on 4/21/21, 10:35 AM

    I don't think there have been any recent comments from anyone at U.Mn. So, back when the original research happened (last year), the following clarification was offered by Qiushi Wu and Kangjie Lu, which at least paints their research in a somewhat better light: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    That said, the current incident seems to have gone beyond the limits of that one and is a new incident. I just thought it would be fair to include their "side".

  • by FrameworkFred on 4/21/21, 3:59 PM

    This feels like the kind of thing that "white hat" hackers have been doing forever. UMN may have introduced useful knowledge into the world in the same way some random hacker is potentially "helping" a company by pointing out that they've left a security hole exposed in their system.

    With that said, kernel developers and companies with servers on the internet are busy doing work that's important to them. This sort of thing is always an unwelcome distraction.

    And, if my neighbor walks in my door at 3 a.m. to let me know I left it unlocked, I'm going to treat them the same way UMN is getting treated in this situation. Or worse.

  • by toxik on 4/21/21, 10:52 AM

    The problem here is really that they're wasting the maintainers' time without their approval. Any ethics board would require prior consent for this. It wouldn't even be hard to do.
  • by gjvc on 4/21/21, 11:42 AM

    I hope USENIX et al ban this student / professor / school / university associated with this work from submitting anything to any of their conferences for 10 years.

    This was his clarification https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    ...in which they have the nerve to say that this is not considered "human research". It most definitely is, given that their attack vector is the same one many people would be keen on using for submitting legitimate requests for getting involved.

    If anything, this "research" highlights the notion that coding is but a small proportion of programming, and that delivery of a product, feature, or bugfix from start to finish is a much bigger job than many people like to admit to themselves or others.

  • by g42gregory on 4/21/21, 6:31 PM

    Reading this email exchange, I worry about the state of our education system, including computer science departments. Instead of making coherent arguments, this PhD student speaks about "preconceived biases". I loved Greg's response. The spirit of Linus lives within the Kernel! These UMN people should be nowhere near the kernel. I guess they got the answer to their research on what would happen if you keep submitting stealth malicious patches to the kernel: you will get found out and banned. Made my day.
  • by javier10e6 on 4/21/21, 2:31 PM

    The research yielded unsurprising results: stealthy patches without a proper smoke screen to provide a veil of legitimacy will cause the purveyor of the patches to become blacklisted... DUH!
  • by ansible on 4/21/21, 12:18 PM

    I still don't get the point of this "research".

    You're just testing the review ability of particular Linux kernel maintainers at a particular point in time. How does that generalize to the extent needed for it to be valid research on open source software development in general?

    You would need to run this "experiment" hundreds or thousands of times across most major open source projects.

  • by angry_octet on 4/21/21, 2:57 PM

    Research without ethics is research without value.

    Unbelievable that this could have passed ethics review, so I'd bet it was never reviewed. Big black eye for the University of Minnesota. Imagine if you are another doctoral student in CS/EE and this tool has ruined your ability to participate in Linux.

  • by dsr12 on 4/21/21, 3:08 PM

    Plonk is a Usenet jargon term for adding a particular poster to one's kill file so that poster's future postings are completely ignored.

    Link: https://en.wikipedia.org/wiki/Plonk_(Usenet)

  • by qalmakka on 4/21/21, 2:21 PM

    Well, they had it coming. They abused the community's trust once in order to gain data for their research, and now it's understandable GKH has very little regard for them. Any action has consequences.
  • by segmondy on 4/21/21, 12:49 PM

    Uhhh, I just read the paper, I stopped reading when I read what I pasted below. You attempt to introduce severe security bugs into the kernel and this is your solution?

    To mitigate the risks, we make several suggestions. First, OSS projects would be suggested to update the code of conduct by adding a code like "By submitting the patch, I agree to not intend to introduce bugs."

  • by endisneigh on 4/21/21, 3:19 PM

    Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any other way to do it other than actually introducing bugs per their proposal.

    That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have say 80% of the class introduce "good code", 10% of the class review all code and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without having to potentially break legit code in use.

    I say this since the crux of what they're trying to discover is:

    1. In OSS anyone can commit.

    2. Though people are incentivized to reject bad code, complexities of modern projects make 100% rejection of bad code unlikely, if not impossible.

    3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door). A sketch of the pattern follows below.
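
    To make (3) concrete, here is a minimal, hypothetical sketch of the pattern the paper calls a "hypocrite commit", written in kernel-style C. The names are invented for illustration; this is not one of the actual UMN patches:

        /* Before the patch: dev leaks when registration fails. */
        static int widget_probe_before(struct widget_dev *dev)
        {
                int err = widget_register(dev);

                if (err)
                        return err;     /* bug: dev is never freed */
                return 0;
        }

        /* After the patch: the leak is "fixed"... */
        static int widget_probe_after(struct widget_dev *dev)
        {
                int err = widget_register(dev);

                if (err) {
                        kfree(dev);     /* plausible-looking leak fix */
                        return err;     /* ...but a caller that also frees or
                                         * dereferences dev on failure now has
                                         * a use-after-free / double free */
                }
                return 0;
        }

    The good half of such a change is real, which is exactly what makes the bad half hard to spot in review.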

  • by traveler01 on 4/21/21, 11:04 AM

    So, for "research" you're screwing around the development of one of the most widely used components in the computer world. Worse, introducing security holes that could reach production environments...

    That's a really stupid behavior ...

  • by robrtsql on 4/21/21, 2:33 PM

    Very embarrassed to see my alma mater in the news today. I was hoping these were just some grad students going rogue but it even looks like the IRB allowed this 'research' to happen.
  • by dcchambers on 4/21/21, 1:56 PM

    So I won't lie, this seems like an interesting experiment and I can understand why the professor/research students at UMN wanted to do it, but my god the collateral damage against the University is massive. Banning all contributions from a major University is no joke. I also completely understand the scorched earth response from Greg. Fascinating.
  • by jl2718 on 4/21/21, 11:38 AM

    I would check their ties to nation-state actors.

    In closed source, nobody would even check. Modern DevOps has essentially replaced manual code review with unit tests.

  • by throwawayffffas on 4/21/21, 11:10 AM

    As a user of the linux kernel, I feel legal action against the "researchers" should be pursued.
  • by ajarmst on 4/21/21, 3:57 PM

    I used to sit on a research ethics board. This absolutely would not have passed such a review. Not a 'revise and resubmit' but a hard pass accompanied by 'what the eff were you thinking?'. And, yes, this should have had an REB review: testing the vulnerabilities of a system that includes people is experimenting on human subjects. Doing so without their knowledge absolutely requires a strict human-subject review, and these "studies" would not pass the first sniff test. I don't think it's even legal in most jurisdictions.
  • by omginternets on 4/21/21, 1:59 PM

    I did my Ph.D in cognitive neuroscience, where I conducted experiments on human subjects. Running these kinds of experiments required approval from an ethics committee, which for all their faults (and there are many), are quite good at catching this kind of shenanigans.

    Is there not some sort of equivalent in this field?

  • by cblconfederate on 4/21/21, 11:21 AM

    I guess someone had to do this unethical experiment, but OTOH, what is the value here? There's a high chance someone would later find these "intentional bugs"; it's how open source works anyway. They just proved that OSS is not military-grade, but nobody thought it was anyway.
  • by devmunchies on 4/21/21, 3:43 PM

    Aditya: I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

    Greg: You can't quit, you're fired.

  • by neatze on 4/21/21, 2:44 PM

    Interesting. If they declared this to the NSF under the human-subjects research section, then to me this is a potential research-ethics issue.

    Imagine saying we would like to test how the fire department responds to fires by setting buildings on fire in NYC.

  • by waihtis on 4/21/21, 11:01 AM

    Should've at least sought approval from the maintainer party, and perhaps tried to orchestrate it so that the patch approver didn't have information about it, but some part of the org did.

    In a network security analogy, this is just unsolicited hacking vs. the penetration test it claims to be.

  • by veltas on 4/21/21, 2:47 PM

    Regardless of whether consent (which was not given) was required, it's worth pointing out that the emails sent to the mailing list were also intentionally misleading, or fraudulent, so some kind of ethics has obviously been violated there.
  • by forgotpwd16 on 4/21/21, 3:19 PM

    Not wanting to play the devil's advocate here, but scummy though it was, they still successfully introduced vulnerabilities into the kernel. Suppose the paper hadn't been released, or an adversary had done this instead: how long would those bugs have lingered before ever being removed? The paper makes a case that FOSS projects shouldn't merely trust authority for security (neither the ones submitting nor the ones reviewing) but should use tools to find potential vulnerabilities in every commit.
  • by kleiba on 4/21/21, 12:38 PM

    This is bullshit research. I mean, what they have actually found out through their experiments is that you can maliciously introduce bugs into the Linux kernel. But did anyone doubt this was possible prior to this "research"?

    Obviously, bugs get introduced into all software projects all the time. And the bugs don't know whether they've been put there intentionally or accidentally. All bugs that ever appeared in the Linux kernel obviously made it through the review process, even when no one actively tried to introduce them.

    So, why should it not be possible to intentionally insert bugs if it already "works" unintentionally? What is the insight gained from this innovative "research"?

  • by causality0 on 4/21/21, 11:54 AM

    > I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

    Responding properly to that statement would require someone to step out of the HN community guidelines.

  • by closeparen on 4/21/21, 3:40 PM

    This is a community that thinks it’s gross negligence if something with a real name on it fails to be airgapped.

    Social shame and reputation damage may be useful defense mechanisms in general, but in a hacker culture where the right to make up arbitrarily many secret identities is a moral imperative, people who burn their identities can just get new ones. Banning or shaming is not going to work against someone with actual malicious intent.

  • by djohnston on 4/21/21, 3:21 PM

    Wow this "researcher" is a complete disaster. Who nurtures such a toxic attitude of entitlement and disregard for others time and resources? Not to mention the possible real world consequences of introducing bugs into this OS. He and his group need to be brought before an IRB.
  • by nwsm on 4/21/21, 3:39 PM

    I would say the research was a success. They found that when a bad actor submits malicious patches they are appropriately banned from the project.
  • by 1970-01-01 on 4/21/21, 5:27 PM

    So be it. Greg is a very trusted member, and has overwhelming support from the community for swinging the banhammer. We have a living kernel to maintain. Minnesota is free to fork the kernel, build their own, recreate the patch process, and send suggestions from there.
  • by jokoon on 4/21/21, 12:32 PM

    I'm pretty confident the NSA has been doing this for at least two decades; it's not that crazy a conspiracy theory.

    Inserting backdoors in the form of bugs is not difficult. Just hijack the machine of a maintainer, insert a well placed semicolon, done!
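
    For illustration, here is a hedged sketch (invented names, not a real kernel patch) of how a single stray semicolon can defeat a check; the attempted 2003 backdoor in the kernel's wait4() relied on a similar one-character trick, an '=' where '==' belonged:

        /* Hypothetical: one stray semicolon disables an access check. */
        if (!is_authorized(user));      /* the ';' ends the if statement here */
        {
                grant_access(user);     /* so this block runs for every caller */
        }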

    Do you remember the quote known as Linus's law? "Given enough eyeballs, all bugs are shallow." Do you really believe the Linux source code is being reviewed for bugs?

    By the way, how do you write tests for a kernel?

    I like open source, but security implies a lot of different problems and open source is not always better for security.

  • by protomyth on 4/21/21, 4:51 PM

    FYI The IRB for University of Minnesota https://research.umn.edu/units/irb has a Human Research Protection Program https://research.umn.edu/units/hrpp where I cannot find anything on research on people without their permission. There is a Participant's Bill of Rights https://research.umn.edu/units/hrpp/research-participants/pa... that would seem to indicate uninformed research is not allowed. I would be curious how doing research on the reactions of people to test stimulus in a non-controlled environment is not human research.
  • by aisio on 4/21/21, 4:50 PM

    One reviewer's comment on a patch of theirs from two weeks ago:

    "Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith. If it's the latter[1], may I suggest the esteemed sociologists to fuck off and stop testing the reviewers with deliberately spewed excrements?"

    https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...

  • by devit on 4/21/21, 11:17 AM

    The project is interesting, but how can they be so dumb as to post these patches under an @umn.edu address instead of using a new pseudonymous identity for each patch?!?

    I mean, sneakily introducing vulnerabilities obviously only works if you don't start your messages by announcing you are one of the guys known to be trying to do so...

  • by chandmk on 4/22/21, 1:38 AM

    I am wondering: if Aditya hadn't responded the way he did (using corporate-lawyer language), would Greg have reached this conclusion? I am a bit surprised by the entitlement he was showing. Why would anyone use those words after sending a nonsense patch! What kind of defence did he think he had among a group of seasoned developers, other than being honest about his intentions? I wouldn't be surprised if his professor didn't even know what he was doing!
  • by philsnow on 4/21/21, 5:14 PM

    This seems like wanton endangerment. Kernels get baked into medical devices and never, ever updated.

    I would be livid if I found that code from these "researchers" was running in a medical device that a family member relied upon.

  • by resoluteteeth on 4/21/21, 3:03 PM

    I suspect the university will take some sort of action now that this has turned into incredibly bad press (although they really should have done something earlier).
  • by nspattak on 4/21/21, 11:33 AM

    WTF? They are experimenting with people without their consent? And they haven't been kicked out of the academic community????
  • by Metacelsus on 4/21/21, 10:47 AM

    Yikes, and what are they hoping to accomplish with this "research"?
  • by inquisitivemind on 4/21/21, 9:28 PM

    I have a question for this community:

    Insofar as this specific method of injecting flaws matches a foreign country's work done on U.S. soil - as many people in this thread have speculated - do people here think that U.S. three letter agencies (in particular NSA/CIA) should have the ability to look at whether the researchers are foreign agents/spies, even though the researchers are operating from the United States? For example, should the three letter agencies have the ability to review these researchers' private correspondence and social graphs?

    Insofar as those agencies should have this ability, then, when should they use it? If they do use it, and find that someone is a foreign agent, in what way and with whom should they share their conclusions?

  • by atleta on 4/21/21, 4:16 PM

    Now, one of the problems with research in general is that negative results don't get published. While in this case it probably resolved itself automatically, if they have any ethical standards then they'll write a paper about how it ended. Something like: "Our assumption was that it's relatively easy to deliberately sneak bugs into the Linux kernel, but it turns out we were wrong. We managed to get our whole university banned, and all former patches from all contributors from our university, including those outside of our research team, reverted."

    Also, while their assumption is interesting, there surely had to be an ethical and safe way to conduct this, especially without allowing their bugs to slip into a release.

  • by up2isomorphism on 4/21/21, 10:39 PM

    From an outsider, the main question is: does this expose an actual weakness in the Linux development model?

    From what I understand, this answer seems to be a "yes".

    Of course, it is understandable that GKH is frustrated, and if his community does not like someone pointing out this issue, that is OK too.

    However, one researcher does not represent the whole university, so it seems immature to vent this on other unrelated people just because you can.

  • by bloat on 4/21/21, 3:52 PM

    It's been a long time since I saw this usage of the word "plonk". Brought back some memories.

    https://en.wikipedia.org/wiki/Plonk_(Usenet)

  • by svarog-run on 4/21/21, 12:00 PM

    I feel like a lot of people here did not interpret this correctly.

    As far as is known, garbage code was not introduced into the kernel. It was caught in the review process literally on the same day.

    However, there is merged code from the same people, which is not necessarily vulnerable. As a precaution the older commits are also being reverted, as these people have been identified as bad actors.

  • by tsujp on 4/21/21, 3:15 PM

    This is categorically unethical behaviour. Attempting to get malicious code into an open source project that powers a large set of the world's infrastructure — or even a small project — should be punished, in my view. The actors are known; they have stated it was intentional.

    I think the Linux Foundation should make an example of this.

  • by squarefoot on 4/21/21, 7:20 PM

    "Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes"."

    Sorry for being the paranoid one here, but reading this raises a lot of warning flags.

  • by seanieb on 4/21/21, 11:00 AM

    Regardless of their methods, I think they just proved the kernel security review process is non-existent, whether in the form of static analysis or human review. What's being done to address those issues?
  • by francoisp on 4/21/21, 3:18 PM

    I fail to see how this does not amount to vandalism of public property. https://www.shouselaw.com/ca/defense/penal-code/594/
  • by nickysielicki on 4/21/21, 7:04 PM

    UMN has some egg on their face, surely, but I think the IEEE should be equally embarrassed that they accepted this paper.
  • by icedchai on 4/21/21, 4:31 PM

    Seems like completely pointless "research." Clearly it wasted the maintainers' time, but it also wasted the "researchers'" time investigating something that is so obviously possible. Weren't there any real projects to work on?
  • by LordN00b on 4/21/21, 11:03 AM

    The "* plonk *" was a very nice touch.
  • by arkh on 4/21/21, 2:44 PM

    > I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

    Maybe not being nice is part of the immune system of open source.

  • by leeuw01 on 4/21/21, 6:28 PM

    In a follow-up [1], the author suggests: OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”

    How can one be so short-sighted?...

    [1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

  • by macspoofing on 4/21/21, 2:21 PM

    Linux maintainers should log a complaint with the University's ethics board. You can't just experiment on people without consent.
  • by pushcx on 4/21/21, 7:04 PM

    CS researchers at the University of Chicago did a similar experiment on me and other maintainers a couple years ago: https://github.com/lobsters/lobsters/issues/517

    And similarly to U Minn, their IRB covered for them: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...

    My experience felt really shitty, and I'm sorry to see I'm not alone. If anyone is organizing a broad response to redress previous abuses or prevent future abuse, I'd appreciate hearing about it, my email's on my profile.

  • by rubyn00bie on 4/21/21, 2:36 PM

    This is supremely fucked up and I’d say is borderline criminal. It’s really lucky asshole researchers like this haven’t caused a bug that cost billions of dollars, or killed someone, because eventually shit like this will... and holy shit will “it was just research” do nothing to save them.
  • by WaitWaitWha on 4/21/21, 1:57 PM

    There is already so much disdain for unethical, ivory-tower thinking in universities; this is not helping.

    But, allow me to pull a different thread. How liable is the professor, the IRB, and the university if there is any calamity caused by the known code?

    What is the high level difference between their action, and spreading malware intentionally?

  • by jedimastert on 4/21/21, 11:03 AM

    Out of curiosity, what would be an actually good way to poke at the pipeline like this? Just ask if they'd OK a patch w/o actually submitting it? A survey?
  • by wuxb on 4/21/21, 2:55 PM

    Sending those patches is just disgraceful. I guess they're using their .edu emails, so banning the university is a very effective action: someone will respond to it. Otherwise, the researchers would just quietly switch to other communities such as Apache or GNU. Who wants buggy patches?
  • by devwastaken on 4/21/21, 3:00 PM

    This is not surprising to me given the quality of Minnesota universities. U of M should be banned from existence. I remember vividly how they'd break their budgets redesigning cafeterias, hiring low-quality 'professors' who refused to digitize paper assignments (they didn't know how), and artificially inflating dorm costs without access to affordable cooking (meal plans only). They have bankrupted plenty of students who were forced to drop out due to their policies on mental health. It's essentially against policy to be depressed or suicidal. They predate on kids in high school who don't at all know what they're signing up for.

    Defund federal student loans. Make these universities stand on their own two feet or be replaced by something better.

  • by Taylor_OD on 4/21/21, 2:20 PM

    The professor is going to give a ted talk in about a year talking about how he got banned from open source development and the five things he learned from it.
  • by eatbitseveryday on 4/21/21, 4:22 PM

    Clarification from their work that was posted on the professor's website:

    https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

  • by kleiba on 4/21/21, 2:18 PM

    How is such a ban going to be effective? The "researchers" could easily continue their experiments using different credentials, right?
  • by NalNezumi on 4/22/21, 6:13 AM

    So the professor at the center of this event, Kangjie Lu[0], is also on the program committee of IEEE S&P 2022.[1]

    I'm by no means a security expert nor a kernel contributor, but considering he's on the program committee, are these kinds of practices commonplace among security/privacy researchers?

    Do ideas/practices like this regularly get a pass at publishing conferences?

    [0] https://www-users.cs.umn.edu/~kjlu/ [1] https://www.ieee-security.org/TC/SP2022/cfpapers.html

  • by sida on 4/22/21, 4:24 AM

    Let me play devil's advocate here, though. This is absolutely necessary and shows the process in the kernel is vulnerable.

    Sure, this is "just" a university research project this time. And sure, this is done in bad taste.

    But there are legitimately malicious national actors (well, including the US govt and the various 3 letter agencies) that absolutely do this. And the national actors are likely even far more sophisticated than a couple of PhD students. They have the time, resources and energy to do this over a very long period of time.

    I think, on the whole, this is very net positive in that it reveals the vulnerability of open source kernel development, despite how shitty it feels.

  • by jrm4 on 4/21/21, 3:31 PM

    Sure. And we are well past the time in which we need to develop real legal action and/or policy -- with consequences against this sort of thing.

    We have an established legal framework to do this. It's called "tort law," and we need to learn how to point it at people who negligently or maliciously create and or mess with software.

    What makes it difficult, of course, is that not only should it be pointed at jerk researchers, but anyone who works on software, provably knows the harm their actions can or do cause, and does it anyway. This describes "black hat hackers," but also quite a few "establishment" sources of software production.

  • by Pensacola on 4/21/21, 10:01 PM

    <conspiracy theory>This is intentionally malicious activity conducted with a perfect cover story</conspiracy theory>
  • by kerng on 4/21/21, 6:38 PM

    Where does such "research" end... sending phishing mails to all US citizens to see how many passwords can be stolen?
  • by lamp987 on 4/21/21, 10:59 AM

    Unethical and harmful.
  • by anarticle on 4/21/21, 5:03 PM

    Ah yes, showing those highly paid linux kernel developers how broken their system of trust and connection is! Great work.

    Now if we can only find more open source developers to punish for trusting contributors!

    Enjoy your ban.

    Sorry if this comment seems off base, this research feels like a low blow to people trying to do good for a largely thankless job.

    I would say they are violating some ideas of Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

  • by LudwigNagasena on 4/21/21, 3:50 PM

    I am honestly surprised anything like this can pass an ethics committee. The reputational risk seems huge.

    For example, in economics departments there is usually a ban on lying to experiment participants. Many of them even explicitly explain to participants that this is a difference between economics and psychology experiments. The reason is that studying preferences is very important to economists, and if participants don’t believe that the experiment conditions are reliable, it will screw the research.

  • by ogre_codes on 4/21/21, 6:25 PM

    If the university was doing research then they should publish their findings on this most recent follow up experiment.

    Suggested title:

    “Linux Kernel developers found to reject nonsense patches from known bad actors”

  • by darksaints on 4/21/21, 3:31 PM

    As a side note to all of the discussion here, it would be really nice if we could find ways to take all of the incredible Linux infrastructure and repurpose it for seL4. It is pretty scary that we've got ~30M lines of code in the kernel and the primary process we have to catch major security bugs is to rely on the experienced eyes of Greg KH or similar. They're awesome, but they're also human. It would be much better to rely on capabilities and process isolation.
  • by nemoniac on 4/21/21, 12:39 PM

    Who funds this? They acknowledge funding from the NSF but you could imagine that it would benefit some other large players to sow uncertainty and doubt about Open Source Software.
  • by DonHopkins on 4/21/21, 7:20 PM

    Shouldn't the university researchers compensate their human guinea pigs with some nice lettuce?
  • by znpy on 4/21/21, 5:29 PM

    I think it's a fair measure, albeit drastic.

    What happens if any of those patches end up in a kernel release?

    It's like setting random houses on fire just to test the responsiveness of local firefighters.

  • by ineedasername on 4/21/21, 4:46 PM

    I don't know how their IRB approved this, although we also don't know what details the researchers gave the IRB.

    It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch.

    If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.

  • by tediousdemise on 4/21/21, 4:57 PM

    This not only erodes trust in the University of Minnesota, but also erodes trust in the Linux kernel.

    Imagine how downstream consumers of the kernel could be affected. The kernel is used for some extremely serious applications, in environments where updates are nonexistent. These bad patches could remain permanently in situ for mission-critical applications.

    The University of Minnesota should be held liable for any damages or loss of life incurred by their reckless decision making.

  • by grae_QED on 4/21/21, 7:32 PM

    This is insulting. The whole premise behind the paper is that open source developers aren't able to parse commits for malicious code. From a security standpoint, sure, a bad actor could attempt to do this. But the fact that he tried this on the Linux kernel, an almost sacred piece of software IMO, and expected it to work takes me aback. This guy either has a huge ego or knows very little about those devs.
  • by dynm on 4/21/21, 4:59 PM

    I'd be interested if there's a more ethical way to do this kind of research, that wouldn't involve actually shipping bugs to users. There certainly is some value in kind of "penetration testing" things to see how well bad actors could get away with this kind of stuff. We basically have to assume that more sophisticated actors are doing this without detection...
  • by freewizard on 4/21/21, 4:47 PM

    Using fake identities and fake papers to expose loopholes and issues in an institution is not new in the science community. The kernel community may not be immune to some challenges common to any sizable institution, I assume, so some ethical hacking here seems reasonable.

    However, doing it repeatedly with real names seems unhelpful to the community and indicates a questionable motivation.

  • by bluenose69 on 4/22/21, 9:57 AM

    The ban seems rational, when viewed in the context of kernel development.

    The benefit is twofold: (a) it's simpler to block a whole university than it is to figure out who the individuals are and (b) this sends a message that there is some responsibility at the institutional level.

    The risk is that someone writing from that university address might have something that would be useful to the software.

    Getting patches and pull-requests accepted is not guaranteed. And it's asking a lot of kernel developers that they check not just for bad code but also for badly-intended code.

    I had a look at the research paper (https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...) and it saddens me to see such a thing coming out of a university. It's like a medical researcher introducing a disease to see whether it spreads quickly.

  • by francoisp on 4/21/21, 3:22 PM

    (I posted this on another entry that dropped off the first page of HN; sorry for the dupe.)

    I fail to see how this does not amount to vandalism of public property. https://www.shouselaw.com/ca/defense/penal-code/594/

  • by im3w1l on 4/21/21, 2:42 PM

    I can't help but think of the Sokal affair. But I'll leave the comparison to someone more knowledgeable about them both.
  • by maccard on 4/21/21, 11:25 AM

    Is there a more readable version of this available somewhere? I really struggle to follow the unformatted mailing list format.
  • by rurban on 4/21/21, 4:52 PM

    This is the big revert, and a good overview of all the damage they did. Some patches were good, most were malicious, and most author names were fantasy.

    https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

  • by djhaskin987 on 4/21/21, 7:29 PM

    Interesting tidbit from the prof's CV where he lists the paper, interpret from it what you will[1]:

    > On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits

    > Qiushi Wu, and Kangjie Lu.

    > To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

    > Note: The experiment did not introduce any bug or bug-introducing commit into OSS. It demonstrated weaknesses in the patching process in a safe way. No user was affected, and IRB exempt was issued. The experiment actually fixed three real bugs. Please see the clarifications[2].

    1: https://www-users.cs.umn.edu/~kjlu/

    2: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

  • by darau1 on 4/21/21, 12:29 PM

    So FOSS is insecure if maintainers are lazy? This would hold true for any piece of software, wouldn't it? The difference here is that even though the "hypocrite commits" /were/ accepted, they were spotted soon after. Something that might not have happened quite as quickly in a closed source project.
  • by CountSessine on 4/21/21, 6:05 PM

    I have to wonder what's going to happen to the advisor who oversaw this research. This knee-caps the whole department when conducting OS research and collaboration. If this isn't considered a big deal in the department, it should be. I certainly wouldn't pursue a graduate degree there in OS research now.
  • by kerng on 4/21/21, 6:32 PM

    What I don't get: why not ask the board of the Linux Foundation if they could attempt social engineering attacks and get authorization? If the Linux Foundation saw value, they'd approve it; and who knows, maybe such tests (hiring pentesters to do social engineering) are done by the Linux Foundation anyway.
  • by beshrkayali on 4/21/21, 2:42 PM

    This seems like a pretty scummy way to do "research". I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low. It's not that they're doing this, I'm sure they're not the first to think of this (for research or malicious reasons), but having the gall to brag about it is a new low.
  • by johncessna on 4/21/21, 6:07 PM

    As a user of Linux, I want to see this ban go further. Nothing from the University of MN, its teaching staff, or its current or past post-grad students.

    Once they clean out the garbage in the Comp Sci department and their research committee that approved this experiment, we can talk.

  • by mycologos on 4/21/21, 5:03 PM

    I agree with most commenters here that this crosses the line of ethical research, and I agree that the IRB dropped the ball on this.

    However, zooming out a little, I think it's kind of useful to look at this as an example of the incentives at play for a regulatory bureaucracy. Comments bemoaning such bureaucracies are pretty common on HN (myself included!), with specific examples ranging from the huge timescale of public works construction in American cities to the FDA's slow approval of COVID vaccines. A common request is: can't these regulators be a little less conservative?

    Well, this story is an example of why said regulators might avoid that -- one mistake here, and there are multiple people in this thread promising to email the UMN IRB and give them a piece of their mind. One mistake! And when one mistake gets punished with public opprobrium, it seems very rational to become conservative and reject anything close to borderline to avoid another mistake. And then we end up with the cautious bureaucracies that we like to complain about.

    Now, in a nicer world, maybe those emails complaining to the IRB would be considered valid feedback for the people working there, but unfortunately it seems plausible that it's the kind of job where the only good feedback is no feedback.

  • by Fordec on 4/21/21, 8:05 PM

    In Ireland, during the referendum to repeal the ban on abortion, there were very heated arguments, bot Twitter accounts, and general toxicity. For the sake of people's sanity, a "Repeal Shield" was implemented that blocked bad-faith actors.

    This news makes me wish to implement my own block on the same contributors in any open source project I'm involved with. At the end of the day, their ethics is their ethics. Those ethics are not Linux-specific; it was just the high-profile target in this instance. I would totally subscribe to or link to a group-sourced file, similar to a README.md or CONTRIBUTORS.md (CODERS_NON_GRATA.md?), that pulled such things in.

  • by rurban on 4/21/21, 4:40 PM

    I'd really like to now review similar patches in FreeRTOS, FreeBSD, and such. Their messages and fixes all follow a certain scheme, which should be easy to detect.

    At least both of them are free from such @umn.edu commits with fantasy names.

  • by Radle on 4/21/21, 2:46 PM

    @gregkh

    These patches look like bombs under bridges to me.

    Do you believe that some open source projects should have legal protection against such actors? The Linux Kernel is pretty much a piece of infrastructure that keeps the internet going.

  • by mikaeluman on 4/21/21, 6:28 PM

    Usually I am very skeptical of "soft" subjects like the humanities; but clearly this is unethical research.

    In addition to wasting people's time, you are potentially messing with software that runs the world.

  • by fennecs on 4/22/21, 4:21 AM

    They are rightfully worried about old commits? Maybe it's time they switched to a more secure language which can more easily detect malicious code. To be honest C seems critically insecure without a whole lot of work. If a bunch of experts even struggle, seems like they need better tools. Especially since Linux is so important, and there are a lot more threats, Rust seems like a good solution.

    Apart from some perhaps critical unsafe stuff which should have a lot of attention, requiring everything to be safe/verified to some extent surely is the answer.

  • by largehotcoffee on 4/21/21, 5:39 PM

    This was absolutely the right move. Smells really fishy given the history. I imagine this is happening in other parts of the community (attempting to add malicious code), albeit under a different context.
  • by ficiek on 4/21/21, 12:55 PM

    Is introducing bugs into computer systems on purpose like this in some way illegal in the USA? I understand that Linux is run by a ton of government agencies as well, would they take interest in this?
  • by wolverine876 on 4/21/21, 6:21 PM

    I don't see the difference between these and other 'hackers', white-hat, black-hat, etc. The difference I see is that the institution tested, Linux, is beloved here.

    Usually people are admired here for finding vulnerabilities in all sorts of systems and processes. For example, when someone submits a false paper to a peer-reviewed journal, people around here root for them; I don't see complaints about wasting the time and violating the trust of the journal.

    But should one of our beloved institutions be tested - now it's an outrage?

  • by nullc on 4/22/21, 6:40 AM

    CS department security research is near-universally not held to be in the scope of IRBs. This isn't entirely bad: the IRB process that projects are subjected to is so broken that it would be a sin to bring that mess onto anything else.

    But it means that 'security' research regularly does ethically questionable stuff.

    IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable) the university practices will not substantially change.

  • by Luker88 on 4/21/21, 3:36 PM

    The discussion links to the GitHub of the research:

    https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...

    It has yet to be published (due next month)

    How about opening a few bug reports to correctly report the final response of the community and the actual impact?

    Not asking to harass them: if anyone should do it, it would be the kernel devs, and I'm not one of them.

  • by fellellor on 4/21/21, 4:56 PM

    What an effing idiot! And then turning around and claiming bullying! At this point I'm not even surprised. Claiming victimhood is a very effective move in US academia these days.
  • by GRBurst on 4/21/21, 10:24 PM

    Actually I do understand BOTH sides, BUT:

    The way the university did these tests, and the reactions afterwards, are just bad.

    What I see here, and what the Uni of Minnesota seems to have neglected, is:

    1. Financial damage (time is wasted)

    2. The ethics of experimenting on human beings

    As a result, the university should give a clear statement on both, and should donate a generous amount of money as compensation for (1).

    For part (2), a simple but honest apology can do wonders!

    ---

    Having said that, I think there are other, ethically better ways to achieve these measurements.

  • by AshamedCaptain on 4/21/21, 5:31 PM

    Researcher sends bogus papers to journal/conference, gets them reviewed and approved, uses that to point how ridiculous the review process of the journal is => GREAT JOB, PEER REVIEW SUCKS!

    Researcher sends bogus patches to bazaar-style project, gets them reviewed and approved, uses that to point how ridiculous the review process of the project is => DON'T DO THAT! BAD RESEARCHER, BAD!

  • by jokoon on 4/22/21, 1:43 PM

    I'm not surprised.

    I'm repeating myself, but I'm pretty certain the NSA or other intel agencies (Israel, especially, considering their netsec expertise) have already done it in one way or another.

    Do you remember the semicolon that caused a big wifi vuln? Hard to really know if it was just a mistake.

    I'm going full paranoiac here, but anyway.

    You can also imagine the NSA submitting patches to the Windows source code without the knowledge of Microsoft, and many other similar scenarios (Android, Apple, etc.)

  • by mabbo on 4/21/21, 3:12 PM

    I think Greg KH would have been wise to add a time limit on this ban. Make it a 10-year block, for example, rather than one with no specific end-date.

    Imagine what happens 25 years from now as some ground-breaking security research is being done at Minnesota, and they all groan: "Right, shoot, back in 2021 some dumb prof got us banned forever from submitting patches".

    Is there a mechanism for the University of Minnesota to appeal, someday? Even murderers have parole hearings, eventually.

  • by theflyinghorse on 4/21/21, 3:12 PM

    "It's just a prank, bro!"

    Incredible that the university researchers decided this was a good idea. Has no one at the university voiced concern that perhaps this is a bad idea?

  • by psim1 on 4/21/21, 3:42 PM

    UMN is still sore that HTTP took off and Gopher didn't.
  • by ddingus on 4/21/21, 4:07 PM

    plonk

    Aaaaand into the kill file they go.

    Been a while since I last saw a proper plonk.

  • by kevinventullo on 4/21/21, 4:26 PM

    Here’s a (perhaps naively) optimistic take: by publishing this research and showing it to lawmakers and industry leaders, it will sound alarms on a serious vulnerability in what is critical infrastructure for much of the tech industry and public sector. This could then lead to investment in mitigations for the vulnerability, e.g. directly funding work to proactively improve security issues in the kernel.
  • by charonn0 on 4/21/21, 3:41 PM

    It seems like this debacle has created a lot of extra work for the kernel maintainers. Perhaps they should ask the university to compensate them.
  • by davidkuhta on 4/21/21, 3:26 PM

    I think the root of the problem can be traced back to the researcher's erroneous claim that "This was not human research".
  • by LanceH on 4/21/21, 12:53 PM

    Committing a non-volunteer to work in your experiment, and attempting to destroy the product of their work, surely isn't ethical research.
  • by bigbillheck on 4/21/21, 12:31 PM

    So how does this differ from the Sokal hoax thing?
  • by skerit on 4/22/21, 8:58 AM

    And yesterday there was another bit of Linux news by Greg KH trending on Reddit. Nice to see him stepping into the spotlight more :)
  • by alkonaut on 4/21/21, 8:13 PM

    If you really wanted to research how to get malicious code into the highest-profile projects like Linux, the social engineering bit would be the most interesting part.

    Whether some unknown contributor can submit a bad patch isn't so interesting for this type of project. Knowing the payouts for exploits, the question is: how much money would one bad reviewer want to let one past?

  • by kemonocode on 4/21/21, 5:13 PM

    I have to question the true motivations behind this. Just a "mere" research paper? Or is there an ulterior motive, such as undermining Linux kernel development, taking advantage of the perceived hostility of the LKML to make a big show of it, and castigating and denouncing those elitist Linux kernel devs?

    So I hear tinfoil is on sale, mayhaps I should stock up.

  • by GNOMES on 4/21/21, 3:46 PM

    Am I missing how these patches were caught/flagged? Was it an automated process, or someone physically looking at the pull requests?
  • by qwertox on 4/21/21, 4:10 PM

    How is this any different from littering in order to research whether it gets cleaned up properly? Or from dumping hard objects onto a highway to see whether they cause harm before the authorities notice?

    I mean, the kernel is now starting to run in cars and even on Mars, and getting those bugs into stable is definitely not an achievement one should be proud of.

  • by fefe23 on 4/21/21, 3:13 PM

    Reminds me of the Tuskegee Syphilis Study.

    Sure, we infected you with syphilis without asking for permission first, but we did it for science!

  • by emeraldd on 4/21/21, 10:47 PM

    Is there a readable version of the message Greg was replying to https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... ? Or was there more to it than what Greg quoted?
  • by rwoerz on 4/22/21, 6:01 AM

    So, next paper would be like "On the Effectiveness of Using Email Domain Names for Kernel Submission Bans"
  • by mryalamanchi on 4/21/21, 12:52 PM

    They just wasted the community's time. No wonder Linus Torvalds goes batshit crazy on these kinds of people!
  • by hzzhang on 4/21/21, 6:18 PM

    This type of research just looks like: let's prove that people will die if killed, by actually killing someone.
  • by thayne on 4/21/21, 5:52 PM

    After they successfully got buggy patches in, did they submit patches to fix the bugs? And were they careful to make sure their buggy patches didn't make it into stable releases? If not, then they risked causing real damage, which is at least toeing the line of being genuinely malicious.
  • by mosselman on 4/21/21, 2:56 PM

    The tone of Aditya Pakki's message makes me think they would be very well served by reading 'How to Win Friends & Influence People' by Dale Carnegie.

    This is obviously the complete opposite of how you should be communicating with someone in most situations let alone when you want something from them.

    I have sure been there though so if anything, take this as a book recommendation for 'How to Win Friends & Influence People'.

  • by spinny on 4/21/21, 9:08 PM

    Are they legally liable in any way for including deliberate flaws in a piece of software they know is widely used, thereby creating an attack surface for _any_ attacker with the skill to do so, and putting private and public infrastructure at risk?
  • by shiyoon on 4/21/21, 4:09 PM

    https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    He seems to have posted some clarifications around this. Worth a read.

  • by matheusmoreira on 4/21/21, 4:43 PM

    It's okay to run experiments on humans without their explicit informed consent now?
  • by yosito on 4/21/21, 11:19 AM

    Can someone explain what the kernel bugs were that were introduced, in general terms?
  • by honeybutt on 4/21/21, 5:55 PM

    Very unethical, and extremely inconsiderate of the maintainers' time, to say the least.
  • by kml on 4/21/21, 4:55 PM

    Aditya Pakki should be banned from any open source projects. Open source depends on contributors who collectively try to do the right thing. People who purposely try to veer projects off course should face real consequences.
  • by stakkur on 4/21/21, 3:19 PM

    When you test in production...
  • by cmclaughlin on 4/23/21, 4:12 AM

    What a waste of talent... these kids know how to program, but instead of working on useful projects they’re wasting everyone’s time. It’s really troubling that any professor would have proposed or OK’d this.
  • by booleandilemma on 4/21/21, 3:02 PM

    The UMN had worked on a research paper dubbed "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits".

    I guess it's not as feasible as they thought.

  • by satai on 4/22/21, 7:33 AM

    Let’s add to the question “what is the quality of the code review process in Linux?” another one: “what is the quality of the ethical review process at universities?”

    I think there should be a real world experiment to test it.

  • by dghlsakjg on 4/21/21, 3:47 PM

    Like all research institutions, University of Minnesota has an ethics committee.

    https://integrity.umn.edu/ethics

    Feel free to write to them.

  • by dumbDev on 4/21/21, 5:08 PM

    What is this? A "science" way of saying it's a prank bro?
  • by jvanderbot on 4/21/21, 7:55 PM

    The most recent possible double-free was from a bad static analyzer, wasn't it? That could have been a good-faith commit, which is unfortunate given the deliberate bad-faith commits before it.
  • by jc2it on 4/29/21, 3:43 PM

    After reading many of the comments I agree with the decision to ban the University. Why? You are free to choose your actions. You are not free to choose the consequences of your actions.
  • by amarant on 4/22/21, 6:16 AM

    I've been thinking, what would happen if someone intentionally hacked a university and erased all data from all their computer systems, and then lied to their faces about it?

    New white paper due soon

  • by omar12 on 4/21/21, 4:57 PM

    This raises the question: "have there been state-sponsored efforts to overwhelm open source maintainers with the intent of sneaking vulnerabilities into software applications?"
  • by ineedasername on 4/22/21, 12:05 AM

    "We'd like to insert malicious code into the software that runs countless millions of computers and see if they figure it out"

    I don't think this was the pitch they gave to their IRB.

  • by dboreham on 4/21/21, 3:55 PM

    The replies here have been fascinating to read. Yes it's bad that subterfuge was engaged in vs kernel devs. But don't the many comments here expressing outrage at the actions of these researchers sound exactly like the kind of outrage commonly expressed by those in power when their misdeeds are exposed? e.g. Republican politicians outraged at a "leaker" who has leaked details of their illegal activity. It honestly looks to me like the tables have been turned here. Surely the fact that the commonly touted security advantages of OSS have been shown to be potentially fictitious, is at least as worrying as the researchers' ethics breaches?
  • by jbirer on 4/22/21, 12:55 AM

    I am baffled by the immaturity and carelessness of experimenting on a kernel that millions of critical machines use, and I applaud the maintainers for dealing swiftly with this.
  • by MR4D on 4/21/21, 4:14 PM

    Looks like vandalism masquerading as “research”.

    Greg’s response is totally right.

  • by redmattred on 4/21/21, 6:00 PM

    I thought there were ethical standards for research: a good study should not knowingly do harm, or at the very least should make those involved aware of their participation.
  • by devillius on 4/21/21, 12:30 PM

    An appropriate place to make a report: https://compliance.umn.edu/
  • by wglb on 4/21/21, 4:55 PM

    While it is easy to consider this unsportsmanlike, one might view it as a supply chain attack. I don't particularly support this approach, but consider for a moment that as a defender (in the security-team sense), you need to be aware of all possible modes of attack and compromise. While the motives of this group are clear, ascribing any particular motive to attackers in general is likely to miss.

    To the supply chain type of attacks, there isn't an easy answer. Classical methods left both the SolarWinds and Codecov attacks in place for way too many days.

  • by dumpsterdiver on 4/21/21, 5:39 PM

    Could someone clarify: this made it to the stable branch, so does that mean that it made it out into the wild? Is there action required here?
  • by autoconfig on 4/21/21, 12:44 PM

    A lot of people seem to consider this meaningless and a waste of time. If we disregard the problems with the patches reaching stable branches for a second (which clearly is problematic), what is the difference between this and companies conducting red team exercises? It seems to me a potentially real and dangerous attack vector has been put under the spotlight here. Increasing awareness around this can't be all bad, particularly in a time when state-sponsored cyber attacks are getting ever more severe.
  • by BTCOG on 4/21/21, 4:09 PM

    Now I'm not one for cancel culture, but fuck these guys. Put their fuckin' names out there to get blackballed. Bunch of clowns.
  • by xmly on 4/26/21, 9:28 PM

    So they A/B tested the kernel maintainers and got banned. What about kernel security? Is the patch process being improved?
  • by soheil on 4/22/21, 12:30 AM

    Is getting reactions from HN also part of their experiment and should we expect our comments to be written about in their paper?
  • by dawnbreez on 4/21/21, 4:54 PM

    logged into my ancient hn account just to tell all of you that pentesting without permission from higher-ups is a bad idea

    yes, this is pentesting

  • by uglygoblin on 4/21/21, 1:47 PM

    If the researchers' desired outcome is more vigilance around patches and contributions, I guess they might achieve that outcome?
  • by liendolucas on 4/21/21, 1:36 PM

    Could this have happened on other open source projects too, like FreeBSD, OpenBSD, etc., or other popular open source software?
  • by francoisp on 4/21/21, 2:51 PM

    Methinks that if you hold a degree from the University of Minnesota, it would be a good idea to let the university know what you think of this.
  • by duerra on 4/21/21, 12:09 PM

    I'll give you one guess whether nation states do this.
  • by beprogrammed on 4/22/21, 4:19 AM

    Well, we get to watch the real results of this in realtime, as they get their whole organization banned from the kernel.
  • by kome on 4/21/21, 11:43 AM

    Does the University of Minnesota have an ethical review board or research ethics board? They need to be contacted ASAP.
  • by HelloNurse on 4/21/21, 3:24 PM

    They seem to be teaching social engineering. Using a young, possibly foreign student as a front is a classy touch.
  • by freewilly1040 on 4/21/21, 7:56 PM

    Is there some tool that provides a nicer view of these types of threads? I find them hard to navigate and read.
  • by limaoscarjuliet on 4/21/21, 6:09 PM

    To me it was akin to spotting volunteers cleaning up streets and, right after they passed, dumping more trash on the same street to see if they come and clean it up again. Low blow if you ask me.
  • by soheil on 4/22/21, 12:24 AM

    Experiment: let's blow up the world to find out who might stop us so we can write a paper about it.
  • by lfc07 on 4/21/21, 12:24 PM

    Their research could have been an advisory email or a blog post for the maintainers, without the nasty experiments. If they really cared about OSS, they would have collaborated with the maintainers and persuaded them to use their software tools for patch work. There is research for the good of all, and there is research for selfish gains. I am convinced this is the latter.
  • by moron4hire on 4/21/21, 11:56 AM

    It's funny. When someone like RMS or ESR or (formerly) Torvalds is "disrespectful" to open source maintainers, this is called "tough love", but when someone else does it, it's screamed about like it's some kind of high crime, with calls to permanently cancel access for all people even loosely related to the original offender.
  • by davidkuhta on 4/21/21, 7:00 PM

    Anyone else find the claim that "This was not human research" as erroneous as I do?
  • by CTDOCodebases on 4/21/21, 2:57 PM

    Fair. You are either part of the solution, part of the problem or just part of the landscape.
  • by 8bitsrule on 4/22/21, 1:16 AM

    Couldn't help themselves. Once they thought of it, they just had to Gopher it.
  • by coward76 on 4/21/21, 2:40 PM

    Make an ethics complaint with the state and get their certification and charter pulled.
  • by soheil on 4/21/21, 4:55 PM

    First thing that comes to mind is The Underhanded C Contest [0], where contestants try to write code that looks harmless but is actually malicious, and that even if caught should look like an innocent bug at worst.

    [0] http://www.underhanded-c.org

  • by pertymcpert on 4/21/21, 9:18 PM

    I want to know how TF the program committee at the IEEE conference decided this was acceptable.
  • by nitinreddy88 on 4/21/21, 2:30 PM

    Can anyone enlighten me why these were not caught in the review process itself?
  • by enz on 4/21/21, 3:10 PM

    I wonder if they can be sued (by the Linux Foundation, maybe) for that...
  • by Apofis on 4/21/21, 3:23 PM

    Minnesota being Minnesota.
  • by birdyrooster on 4/23/21, 7:46 PM

    Straight up grift. If it looks like a duck, quacks like a duck...
  • by ilamont on 4/21/21, 5:28 PM

    Reminded me of a story from more than a decade ago about an academic who conducted a series of "breaching experiments" in City of Heroes/City of Villains to study group behavior, basically breaking the social rules (but not the game rules) without other participants' or the game studio's knowledge. It was discussed on HN in 2009 (https://news.ycombinator.com/item?id=690551).

    Here's how the professor (a sociologist) described his methodology:

    These three sets of behaviors – rigidly competitive pvp tactics (e. g., droning), steadfastly uncooperative social play outside the game context (e. g., refusing to cooperate with zone farmers), and steadfastly uncooperative social play within the game context (e. g., playing solo and refusing team invitations) – marked Twixt’s play from the play of all others within RV.

    Translation: He killed other players in situations that were allowed by the game's creators but frowned upon by the majority of real-life participants. For instance, "villains" and "heroes" aren't supposed to fraternize, but they do anyway. When "Twixt" happened upon these and other situations -- such as players building points by taking on easy missions against computer-generated enemies -- he would ruin them, often by "teleporting" players into unwinnable killzones. The other players would either die or have their social relations disrupted. Further, "Twixt" would rub it in by posting messages like:

    Yay, heroes. Go good team. Vills lose again.

    The reaction to the experiment and to the paper was what you would expect. The author later said it wasn't an experiment in the academic sense, claiming:

    ... this study is not really an experiment. I label it as a “breaching experiment” in reference to analogous methods of Garfinkel, but, in fact, neither his nor my methods are experimental in any truly scientific sense. This should be obvious in that experimental methods require some sort of control group and there was none in this case. Likewise, experimental methods are characterized by the manipulation of a treatment variable and, likewise, there was none in this case.

    Links:

    http://www.nola.com/news/index.ssf/2009/07/loyola_university...

    https://www.ilamont.com/2009/07/academic-gets-rise-from-brea...

  • by sadfev on 4/21/21, 8:23 PM

    Dang, I am not sure how to feel about this kind of “research”
  • by werber on 4/21/21, 10:07 PM

    Could this have just been someone trying to cover up being a mediocre programmer in academia by framing it in a lens that would fly in the academy, with some nonsense, vaguely liberal-arts-sounding social-experiment premise?
  • by metalliqaz on 4/21/21, 12:27 PM

    Wow, shocking and completely unethical by that professor.
  • by ne38 on 4/21/21, 8:37 PM

    It was not done for research purposes. The NSA is behind them.
  • by iou on 4/21/21, 10:42 PM

    Did Linus comment on any of this yet? :popcorn:
  • by shadowgovt on 4/21/21, 4:26 PM

    Is banning an entire university's domain from submitting to a project due to the actions of a few of its members an example of cancel culture?
  • by LegitShady on 4/21/21, 2:32 PM

    They should be reported to the authorities for attempting to introduce security vulnerabilities into software intentionally. This is not ok.
  • by balozi on 4/21/21, 4:02 PM

    Uff da! I really do hope the administrators at University of Minnesota truly understand the gravity of this F* up. I doubt they will though.
  • by TacticalCoder on 4/21/21, 12:18 PM

    Or some enemy state pawn(s) trying to add backdoors and then use the excuse of "university research paper" should they get caught?
  • by gumby on 4/21/21, 3:34 PM

    This is the kind of study (unusual for CS) that requires IRB approval. I wonder if they thought to seek approval, and if they received it?
  • by crazypython on 4/21/21, 12:30 PM

    Trust is currency. Trust is an asset.
  • by francoisp on 4/21/21, 8:02 PM

    those that can't do teach, and those that can't teach troll open source devs?
  • by readme on 4/21/21, 11:42 AM

    these people have no ethics
  • by nabla9 on 4/21/21, 1:16 PM

    If it was up to me, I would

    1) send ethics complaint to the University of Minnesota, and

    2) report this to FBI cyber crime division.

  • by jcun4128 on 4/21/21, 5:19 PM

    Huh, I never knew of plonk. I bet I've been plonked before.
  • by foolfoolz on 4/21/21, 2:44 PM

    how can i see these prs?
  • by brundolf on 4/21/21, 9:29 PM

    What a bizarre saga.
  • by a-dub on 4/21/21, 9:18 PM

    so basically they demonstrated that the oss security model, as it operates today, is not working as it had been previously hoped.

    it's good work and i'm glad they've done it, but that's depressing.

    now what?

  • by devpbrilius on 4/21/21, 3:41 PM

    Weirdly enough
  • by dt123 on 4/22/21, 11:07 AM

    cannot wait for Rust in the kernel..
  • by ElectricMind on 4/22/21, 10:49 AM

    Will he get a job or work somewhere again?
  • by arua442 on 4/21/21, 1:47 PM

    Disgusting.
  • by treesknees on 4/21/21, 2:06 PM

    The full title is "Linux bans University of Minnesota for sending buggy patches in the name of research" and it seems to justify the ban. It's not as though these students were just bad programmers; they were intentionally introducing bugs, performing unethical experimentation on volunteers and members of another organization without their consent.

    Unfortunately even if the latest submissions were sent with good intentions and have nothing to do with the bug research, the University has certainly lost the trust of the kernel maintainers.

  • by WrtCdEvrydy on 4/21/21, 2:21 PM

    I just want you to know that it is extremely unethical to write a paper where you attempt to discredit others by using your university's reputation to create vulnerabilities on purpose.

    I back your decision, and fuck these people. I will additionally be sending a strongly worded email to this person, their advisor, and whoever's in charge of this joke of a computer science school. Sometimes I wish we had the ABA equivalent for computer science.

  • by TedShiller on 4/21/21, 9:09 PM

    TLDR?
  • by mort96 on 4/21/21, 2:46 PM

    The previous discussion seems to have suddenly disappeared from the front page:

    https://news.ycombinator.com/item?id=26887670

  • by atleta on 4/21/21, 4:10 PM

    It's already being discussed on HN [1] but for some reason it's down to the 3rd page despite having ~1200 upvotes at the moment and ~600 comments, including from Greg KH. (And the submission is only 5 hours old.)

    [1] https://news.ycombinator.com/item?id=26887670

  • by donatj on 4/21/21, 2:09 PM

    I wish the title were clearer. Linux bans University of Minnesota for sending buggy patches on purpose.
  • by WrtCdEvrydy on 4/21/21, 2:22 PM

    Yes, and robbing a bank to show that the security is lax is totally fine because the real criminals don't notify you before they rob a bank.

    Do you understand how dumb that sounds?

  • by kingsuper20 on 4/21/21, 1:16 PM

    Since there is bound to be a sort of trust hierarchy in these commits, is it possible that bonafide name-brand university people/email addresses come with an imprimatur that has now been damaged generally?

    Given the size and complexity of the Linux (/GNU) codeworld, I have to wonder if they are coming up against (or already did) the practical limits of assuring safety and quality using the current model of development.

  • by PHDchump on 4/21/21, 4:38 PM

    lol, this is also how Russia does their research, as with SolarWinds. Do not try to attack a supply chain or do security research without permission. They should be investigated by the FBI for doing recon on a supply chain, to make sure they weren't trying to do something worse. Minnesota leads the way in USA embarrassment once again.
  • by b0rsuk on 4/21/21, 3:34 PM

    Think of the potential downstream effects of a vulnerable patch being introduced into the Linux kernel. Buggy software in mobile devices, servers, street lights... this is like someone introducing a bug into a university grading system.

    Someone should look into who sponsored this research. Was there a state agent?

  • by calylex on 4/22/21, 12:39 AM

    Reminds me of "It's just a prank bro" video from Filthy Frank https://www.youtube.com/watch?v=_wldE_4xjVQ
  • by jtdev on 4/21/21, 6:18 PM

    University of Minnesota is involved with the Confucius Institute... what could go wrong when a U.S. university accepts significant funding from a hostile foreign power?

    https://experts.umn.edu/en/organisations/confucius-institute

  • by knz_ on 4/21/21, 3:36 PM

    The bad actors here should be expelled and deported. The nationalities involved make it clear this is likely a backfired foreign intelligence operation and not just 'research'.

    They were almost certainly expecting an obvious bad patch to be reverted while trying to sneak by a less obvious one.

  • by mnouquet on 4/21/21, 5:22 PM

    In other news: the three little pigs ban wolves after wolves exposed the dubious engineering of the straw house by blowing on it for a research paper.
  • by unanswered on 4/21/21, 3:43 PM

    I am concerned that the kernel maintainers might be falling into another trap: it is possible that some patches were designed such that they are legitimate fixes, and moreover such that reverting them amounts to introducing a difficult-to-detect malicious bug.

    Maybe I'm just too cynical and paranoid though.

  • by unanswered on 4/21/21, 3:31 PM

    Presumably the next step is an attempt to cancel the kernel maintainers on account of some politically powerful - oops, I mean, some politically protected characteristics of the researchers.
  • by shadowgovt on 4/21/21, 3:09 PM

    Academic reputation has always mattered, but I can't recall the last time I've seen an example as stark as "I attend a university that is forbidden from submitting patches to the Linux kernel."
  • by andi999 on 4/21/21, 3:09 PM

    Somebody should have told them that since Microsoft is now pro-open-source, this wouldn't land any of them a cushy position after the blowup at uni.
  • by Quarrelsome on 4/21/21, 10:55 AM

    This is ridiculously unethical research. Despite the positive underlying reasons, treating someone as a lab rat (in this case, the maintainers reviewing PRs) feels almost sociopathic.
  • by incrudible on 4/21/21, 2:07 PM

    From an infosec perspective, I think this is a knee-jerk response to someone attempting a penetration test in good faith and failing.

    The system appears to have worked, so that's good news for Linux. On the other hand, now that the university has been banned, they won't be able to find holes in the process that may remain, that's bad news for Linux.

  • by mfringel on 4/21/21, 3:18 PM

    When James O'Keefe tries to run a fake witness scam on the Washington Post, and the newspaper successfully detects it, the community responds with "Well played!"

    When a university submits intentionally buggy patches to the Linux Kernel, and the maintainers successfully detect it, the community responds with "That was an incredibly scummy thing to do."

    I sense a teachable moment, here.

  • by InsomniacL on 4/21/21, 11:00 AM

    Seems to me they exposed a vulnerability in the way code is contributed.

    If this was Facebook and their response was:

    > "stop wasting our time"

    > "we'll report you"

    the responses here would be very different.

  • by returningfory2 on 4/21/21, 2:59 PM

    Commenters have been reasonably accusing the researchers of bad practice, but I think there's another possible take here based on Hanlon's razor: "never attribute to malice that which is adequately explained by stupidity".

    If you look at the website of the PhD student involved [1], they seem to be writing mostly legitimate papers about, for example, using static analysis to find bugs. In this kind of research, having a good reputation in the kernel community is probably pretty valuable because it allows you to develop and apply research to the kernel and get some publications/publicity out of that.

    But now, by participating in this separate, unethical research about the OSS process, they've damaged their professional reputation and probably set back their career somewhat. In this interpretation, their other changes were made in good faith but have now been tainted by the controversial paper.

    [1] https://qiushiwu.github.io/

  • by tester34 on 4/21/21, 11:19 AM

    Researcher(s) show that it's relatively easy to introduce bugs into the kernel

    HN: let's hate researcher(s) instead of process

    Wow.

    Assume good faith, I guess?

  • by duxup on 4/21/21, 2:49 PM

    I don't like this university ban approach.

    Universities are places with lots of different students, professors, and different people with different ideas, and inevitably people who make bad choices.

    Universities don't often act with a single purpose or intent. That's what makes them interesting. Prone to failure and bad ideas, but also new ideas that you can't do at corporate HQ because you've got a CEO breathing down your neck.

    At the University of Minnesota there's 50k+ students at the Twin Cities campus alone, 3k plus instructors. Even more at other University of Minnesota campuses.

    None of those people did anything wrong. Putting the onus on them to effect change to me seems unfair. The people banned didn't do anything wrong.

    Now the kernel doesn't 'need' any of their contributions, but I think this is a bad method / standard to set to penalize / discourage everyone under an umbrella when they've taken no bad actions themselves.

    Although I can't put my finger on why, this ban on whole swaths of people in some ways seems very not open source.

    The folks who did the thing were wrong to do so, but the vast majority of people now impacted by this ban didn't do the thing.

  • by perfunctory on 4/21/21, 11:22 AM

    I don't quite understand the outrage. I'm quite sure most HN readers have been doing, or been involved in, similar experiments one way or another. Isn't A/B testing an experiment on consumers (people) without their consent?
  • by ilammy on 4/21/21, 11:34 AM

    So many comments here refrain, “They should have asked for consent first”. But wouldn't that be detrimental to the subject of the research, namely stealthily introducing security vulnerabilities? How should a consent request look to preserve the surprise factor? A university approaches you and says, “Would it be okay for us to submit some patches with vulnerabilities for review, and you try and guess which ones are good and which ones have bugs?” Of course you would be extra careful when reviewing those specific patches. But real malicious actors, of course, would never be so kind and ethical as to announce their intentions beforehand.
  • by noxer on 4/21/21, 11:28 AM

    Someone does voluntary work, and people think that gives them some ethical privilege to be asked before someone puts their work to the test? Sure, it would be nice to ask, but at the same time it renders the testing useless. They wanted to see how review goes when reviewers aren't aware that someone is testing them. You can't do this with consent.

    The wasting-time argument is nonsense too: it's not like they did this thousands of times, and besides, reviewing intentionally bad code is not a waste of time. It is just as productive as reviewing "good" code, and together with the follow-up fix patch it should be even more valuable work. It not only adds a patch, it also makes the reviewer better.

    Yeah, it ain't fun when people trick you or point out that you did not succeed in what you tried to do. But instead of playing the victim and playing the unethical-human-experiment card, maybe focus on improving.