by kdbg on 4/21/21, 10:32 AM with 1912 comments
by dang on 4/21/21, 7:00 PM
https://news.ycombinator.com/item?id=26887670&p=2
https://news.ycombinator.com/item?id=26887670&p=3
https://news.ycombinator.com/item?id=26887670&p=4
https://news.ycombinator.com/item?id=26887670&p=5
https://news.ycombinator.com/item?id=26887670&p=6
https://news.ycombinator.com/item?id=26887670&p=7
(Posts like this will go away once we turn off pagination. It's a workaround for performance, which we're working on fixing.)
Also, https://www.neowin.net/news/linux-bans-university-of-minneso... gives a bit of an overview. (It was posted at https://news.ycombinator.com/item?id=26889677, but we've merged that thread hither.)
Edit: related ongoing thread: UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (205 comments and counting)
by rzwitserloot on 4/21/21, 12:28 PM
"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".
That'll be a fun paper to write, no doubt.
Additional context:
* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].
Greg KH then immediately calls bullshit on this and proceeds to ban the entire university from making commits [2].
The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie simply continues their experiment of 'what would someone intentionally trying to add malicious code to the kernel do').
* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]
[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2
[2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
[3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...
by ENOTTY on 4/21/21, 10:55 AM
> Because of this, I will now have to ban all future contributions from your University.
Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.
EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.
In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Liu[4].
Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Liu in any significant way.
[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22
[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...
by ltfish on 4/21/21, 3:03 PM
- Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.
- According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.
Greg has every reason to be unhappy, since the maintainers were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *very likely* has nothing to do with the three intentionally incorrect patch attempts leading to the paper. Many people on HN do not seem to know this.
[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
by motohagiography on 4/21/21, 2:33 PM
What this professor is proving is that open source and (likely) other high-trust networks cannot survive truly mendacious participants; perhaps by mistake, he's also showing how important it is to make very harsh and public examples of such actors and their mendacity.
I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.
by karsinkk on 4/21/21, 3:35 PM
They claim [1] that none of the bogus patches were merged into the stable tree:
>Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
I haven't been able to find out which 3 patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.
[1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
[2] : https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
by karlding on 4/21/21, 8:29 PM
[0] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...
by Dumbdo on 4/21/21, 3:00 PM
Can someone who's more invested into kernel devel find them and analyze their impact? That sounds pretty interesting to me.
Edit: This is the patch reverting all commits from that mail domain: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine, though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
by whack on 4/21/21, 7:45 PM
Conducting such pen-tests, and then publishing the results openly, helps raise awareness about the need to assume-bad-faith in all OSS contributions. If some random grad student was able to successfully inject 4 vulnerabilities before finally getting caught, I shudder to think how many vulnerabilities were successfully injected, and hidden, by various nation-states. In order to better protect ourselves from cyberwarfare, we need to be far more vigilant in maintaining OSS.
Ideally, such research projects should gain prior approval from the project maintainers. But even though they didn't, this paper is still a net-positive contribution to society, by highlighting the need to take security more seriously when accepting OSS patches.
by gnfargbl on 4/21/21, 10:54 AM
> A lot of these have already reached the stable trees.
If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.
by Aissen on 4/21/21, 1:41 PM
[PATCH 000/190] Revertion of all of the umn.edu commits
by random5634 on 4/21/21, 2:14 PM
UMN looks pretty shoddy - the response from the researcher saying these were automated by a tool looks like a potential lie.
by kwdc on 4/21/21, 4:41 PM
Or is this kind of experiment deemed fair game? Red vs blue team kind of thing? Penetration testing.
But if it was me in this situation, I'd ban them for the ethics violation as well. Acting like an evildoer means you might get caught... and punished. I found the email about cease and desist particularly bad behavior. If that student was lying, then the university will have to take real action. Reputation damage and all that. Surely an academic reprimand.
I'm sure there's plenty of drama and context we don't know about.
by kdbg on 4/21/21, 10:35 AM
That said, the current incident seems to have gone beyond the limits of that one and is a new incident. I just thought it would be fair to include their "side".
by FrameworkFred on 4/21/21, 3:59 PM
With that said, kernel developers and companies with servers on the internet are busy doing work that's important to them. This sort of thing is always an unwelcome distraction.
And, if my neighbor walks in my door at 3 a.m. to let me know I left it unlocked, I'm going to treat them the same way UMN is getting treated in this situation. Or worse.
by toxik on 4/21/21, 10:52 AM
by gjvc on 4/21/21, 11:42 AM
This was his clarification https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
...in which they have the nerve to say that this is not considered "human research". It most definitely is, given that their attack vector is the very channel many people use to get involved and submit legitimate contributions.
If anything, this "research" highlights that coding is but a small proportion of programming, and that delivering a product, feature, or bugfix from start to finish is a much bigger job than many people like to admit to themselves or others.
by g42gregory on 4/21/21, 6:31 PM
by javier10e6 on 4/21/21, 2:31 PM
by ansible on 4/21/21, 12:18 PM
You're just testing the review ability of particular Linux kernel maintainers at a particular point in time. How does that generalize to the extent needed for it to be valid research on open source software development in general?
You would need to run this "experiment" hundreds or thousands of times across most major open source projects.
by angry_octet on 4/21/21, 2:57 PM
Unbelievable that this could have passed ethics review, so I'd bet it was never reviewed. Big black eye for the University of Minnesota. Imagine if you are another doctoral student in CS/EE and this tool has ruined your ability to participate in Linux.
by dsr12 on 4/21/21, 3:08 PM
by qalmakka on 4/21/21, 2:21 PM
by segmondy on 4/21/21, 12:49 PM
To mitigate the risks, we make several suggestions. First, OSS projects would be suggested to update the code of conduct by adding a code like "By submitting the patch, I agree to not intend to introduce bugs."
by rincebrain on 4/21/21, 2:05 PM
by endisneigh on 4/21/21, 3:19 PM
That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have, say, 80% of the class introduce "good code", 10% of the class review all code, and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without having to potentially break legit code in use.
I say this since the crux of what they're trying to discover is:
1. In OSS anyone can commit.
2. Though people are incentivized to reject bad code, complexities of modern projects make 100% rejection of bad code unlikely, if not impossible.
3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door); a sketch of the pattern follows.
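To make the "hypocrite commit" pattern concrete, here is a minimal C sketch of what the paper describes: a patch that visibly fixes one bug while quietly introducing another. All names here (demo_dev, demo_enable, DEMO_BUF_SIZE) are hypothetical, not taken from the actual patches:

    /* Before: a real but minor bug -- the allocation is never checked. */
    static int demo_start(struct demo_dev *dev)
    {
        dev->buf = kmalloc(DEMO_BUF_SIZE, GFP_KERNEL);
        return demo_enable(dev);   /* dev->buf may be NULL here */
    }

    /* "Fix": the NULL check is genuinely correct, and it is what the
     * reviewer focuses on. But the new error path also frees dev->buf,
     * which the caller still owns and frees again on failure -- a
     * hidden double free. */
    static int demo_start(struct demo_dev *dev)
    {
        dev->buf = kmalloc(DEMO_BUF_SIZE, GFP_KERNEL);
        if (!dev->buf)
            return -ENOMEM;
        if (demo_enable(dev)) {
            kfree(dev->buf);
            return -EIO;
        }
        return 0;
    }

The legitimate half of the diff draws the reviewer's attention, which is exactly the asymmetry the classroom version above would be probing.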
by traveler01 on 4/21/21, 11:04 AM
That's really stupid behavior...
by robrtsql on 4/21/21, 2:33 PM
by dcchambers on 4/21/21, 1:56 PM
by jl2718 on 4/21/21, 11:38 AM
In closed source, nobody would even check. Modern DevOps has essentially replaced manual code review with unit tests.
by throwawayffffas on 4/21/21, 11:10 AM
by ajarmst on 4/21/21, 3:57 PM
by omginternets on 4/21/21, 1:59 PM
Is there not some sort of equivalent in this field?
by cblconfederate on 4/21/21, 11:21 AM
by devmunchies on 4/21/21, 3:43 PM
Greg: You can't quit, you're fired.
by neatze on 4/21/21, 2:44 PM
Imagine saying we would like to test how the fire department responds to fires by setting buildings on fire in NYC.
by waihtis on 4/21/21, 11:01 AM
In a network security analogy, this is unsolicited hacking, not the penetration test it claims to be.
by veltas on 4/21/21, 2:47 PM
by forgotpwd16 on 4/21/21, 3:19 PM
by kleiba on 4/21/21, 12:38 PM
Obviously, bugs get introduced into all software projects all the time. And the bugs don't know whether they've been put there intentionally or accidentally. All bugs that ever appeared in the Linux kernel obviously made it through the review process, even when no one actively tried to introduce them.
So, why should it not be possible to intentionally insert bugs if it already "works" unintentionally? What is the insight gained from this innovative "research"?
by causality0 on 4/21/21, 11:54 AM
Responding properly to that statement would require someone to step out of the HN community guidelines.
by closeparen on 4/21/21, 3:40 PM
Social shame and reputation damage may be useful defense mechanisms in general, but in a hacker culture where the right to make up arbitrarily many secret identities is a moral imperative, people who burn their identities can just get new ones. Banning or shaming is not going to work against someone with actual malicious intent.
by djohnston on 4/21/21, 3:21 PM
by nwsm on 4/21/21, 3:39 PM
by 1970-01-01 on 4/21/21, 5:27 PM
by jokoon on 4/21/21, 12:32 PM
Inserting backdoors in the form of bugs is not difficult. Just hijack the machine of a maintainer, insert a well placed semicolon, done!
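A contrived C illustration of that point (hypothetical names, not an actual kernel patch): a single stray semicolon becomes the entire if-body, turning a guard into a no-op while still compiling under common warning settings:

    /* Intended: only authorized peers get access. */
    if (is_authorized(peer)) {
        grant_access(peer);
    }

    /* Sabotaged: the ';' is now the whole if-body, so the braces
     * below form a bare block that always runs -- everyone gets
     * access. One character, easy to miss in a large diff. */
    if (is_authorized(peer));
    {
        grant_access(peer);
    }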
Do you remember "Linus's Law"? "Given enough eyeballs, all bugs are shallow." Do you really believe the Linux source code is being reviewed for bugs?
By the way, how do you write tests for a kernel?
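For what it's worth, the kernel does have test infrastructure: kselftest drives it from userspace, and KUnit (mainlined around 5.5) runs unit tests inside the kernel itself. A minimal KUnit suite looks roughly like this sketch (suite and case names made up):

    #include <kunit/test.h>

    static void demo_math_test(struct kunit *test)
    {
        /* assertion evaluated inside the running kernel */
        KUNIT_EXPECT_EQ(test, 4, 2 + 2);
    }

    static struct kunit_case demo_test_cases[] = {
        KUNIT_CASE(demo_math_test),
        {}
    };

    static struct kunit_suite demo_test_suite = {
        .name = "demo",
        .test_cases = demo_test_cases,
    };
    kunit_test_suite(demo_test_suite);

None of that helps much against this attack, though: a hypocrite commit passes functional tests by design, which is why it targets the reviewer rather than the test suite.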
I like open source, but security implies a lot of different problems and open source is not always better for security.
by protomyth on 4/21/21, 4:51 PM
by aisio on 4/21/21, 4:50 PM
"Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith. If it's the latter[1], may I suggest the esteemed sociologists to fuck off and stop testing the reviewers with deliberately spewed excrements?"
https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...
by devit on 4/21/21, 11:17 AM
I mean, sneakily introducing vulnerabilities obviously only works if you don't start your messages by announcing you are one of the guys known to be trying to do so...
by chandmk on 4/22/21, 1:38 AM
by philsnow on 4/21/21, 5:14 PM
I would be livid if I found that code from these "researchers" was running in a medical device that a family member relied upon.
by resoluteteeth on 4/21/21, 3:03 PM
by nspattak on 4/21/21, 11:33 AM
by Metacelsus on 4/21/21, 10:47 AM
by inquisitivemind on 4/21/21, 9:28 PM
Insofar as this specific method of injecting flaws matches a foreign country's work done on U.S. soil - as many people in this thread have speculated - do people here think that U.S. three letter agencies (in particular NSA/CIA) should have the ability to look at whether the researchers are foreign agents/spies, even though the researchers are operating from the United States? For example, should the three letter agencies have the ability to review these researchers' private correspondence and social graphs?
Insofar as those agencies should have this ability, then, when should they use it? If they do use it, and find that someone is a foreign agent, in what way and with whom should they share their conclusions?
by atleta on 4/21/21, 4:16 PM
Also, while their assumption is interesting, there surely had to be an ethical and safe way to conduct this, especially one that didn't allow their bugs to slip into a release.
by up2isomorphism on 4/21/21, 10:39 PM
From what I understand, this answer seems to be a "yes".
Of course, it is understandable that GKH is frustrated, and if his community does not like someone pointing out this issue, that is OK too.
However, one researcher does not represent the whole university, so it seems immature to vent this at other, unrelated people just because you can.
by bloat on 4/21/21, 3:52 PM
by svarog-run on 4/21/21, 12:00 PM
As far as is known, garbage code was not introduced into the kernel. It was caught in the review process literally the same day.
However, code from the same people has been merged, and it is not necessarily vulnerable. As a precaution the older commits are also being reverted, since these people have been identified as bad actors.
by tsujp on 4/21/21, 3:15 PM
I think the Linux Foundation should make an example of this.
by squarefoot on 4/21/21, 7:20 PM
Sorry for being the paranoid one here, but reading this raises a lot of warning flags.
by seanieb on 4/21/21, 11:00 AM
by francoisp on 4/21/21, 3:18 PM
by nickysielicki on 4/21/21, 7:04 PM
by icedchai on 4/21/21, 4:31 PM
by LordN00b on 4/21/21, 11:03 AM
by arkh on 4/21/21, 2:44 PM
Maybe not being nice is part of the immune system of open source.
by leeuw01 on 4/21/21, 6:28 PM
How can one be so short-sighted?...
[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
by macspoofing on 4/21/21, 2:21 PM
by pushcx on 4/21/21, 7:04 PM
And similarly to U Minn, their IRB covered for them: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
My experience felt really shitty, and I'm sorry to see I'm not alone. If anyone is organizing a broad response to redress previous abuses or prevent future abuse, I'd appreciate hearing about it, my email's on my profile.
by rubyn00bie on 4/21/21, 2:36 PM
by WaitWaitWha on 4/21/21, 1:57 PM
But, allow me to pull a different thread. How liable is the professor, the IRB, and the university if there is any calamity caused by the known code?
What is the high level difference between their action, and spreading malware intentionally?
by jedimastert on 4/21/21, 11:03 AM
by wuxb on 4/21/21, 2:55 PM
by devwastaken on 4/21/21, 3:00 PM
Defund federal student loans. Make these universities stand on their own two feet or be replaced by something better.
by Taylor_OD on 4/21/21, 2:20 PM
by eatbitseveryday on 4/21/21, 4:22 PM
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
by kleiba on 4/21/21, 2:18 PM
by NalNezumi on 4/22/21, 6:13 AM
I'm by no means a security expert nor a kernel contributor, but considering he's on the program committee, are these kinds of practices commonplace among security/privacy researchers?
Do ideas/practices like this regularly get a pass at conference publication?
[0] https://www-users.cs.umn.edu/~kjlu/ [1] https://www.ieee-security.org/TC/SP2022/cfpapers.html
by sida on 4/22/21, 4:24 AM
Sure, this is "just" a university research project this time. And sure, this is done in bad taste.
But there are legitimately malicious national actors (well, including the US govt and the various 3 letter agencies) that absolutely do this. And the national actors are likely even far more sophisticated than a couple of PhD students. They have the time, resources and energy to do this over a very long period of time.
I think on the whole this is a very net positive in that it reveals the vulnerability of open source kernel development, despite how shitty it feels.
by jrm4 on 4/21/21, 3:31 PM
We have an established legal framework to do this. It's called "tort law," and we need to learn how to point it at people who negligently or maliciously create and/or mess with software.
What makes it difficult, of course, is that not only should it be pointed at jerk researchers, but anyone who works on software, provably knows the harm their actions can or do cause, and does it anyway. This describes "black hat hackers," but also quite a few "establishment" sources of software production.
by Pensacola on 4/21/21, 10:01 PM
by kerng on 4/21/21, 6:38 PM
by mrleinad on 4/22/21, 2:44 AM
https://twitter.com/UMNComputerSci/status/138494868382169497...
by lamp987 on 4/21/21, 10:59 AM
by anarticle on 4/21/21, 5:03 PM
Now if we can only find more open source developers to punish for trusting contributors!
Enjoy your ban.
Sorry if this comment seems off base, this research feels like a low blow to people trying to do good for a largely thankless job.
I would say they are violating some ideas of Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
by LudwigNagasena on 4/21/21, 3:50 PM
For example, in economics departments there is usually a ban on lying to experiment participants. Many of them even explicitly explain to participants that this is a difference between economics and psychology experiments. The reason is that studying preferences is very important to economists, and if participants don’t believe that the experiment conditions are reliable, it will screw the research.
by ogre_codes on 4/21/21, 6:25 PM
Suggested title:
“Linux Kernel developers found to reject nonsense patches from known bad actors”
by darksaints on 4/21/21, 3:31 PM
by nemoniac on 4/21/21, 12:39 PM
by DonHopkins on 4/21/21, 7:20 PM
by znpy on 4/21/21, 5:29 PM
What happens if any of those patches ends up in a kernel release?
It's like setting random houses on fire just to test the responsiveness of local firefighters.
by ineedasername on 4/21/21, 4:46 PM
It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch.
If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.
by tediousdemise on 4/21/21, 4:57 PM
Imagine how downstream consumers of the kernel could be affected. The kernel is used for some extremely serious applications, in environments where updates are nonexistent. These bad patches could remain permanently in situ for mission-critical applications.
The University of Minnesota should be held liable for any damages or loss of life incurred by their reckless decision making.
by grae_QED on 4/21/21, 7:32 PM
by dynm on 4/21/21, 4:59 PM
by freewizard on 4/21/21, 4:47 PM
However, doing it repeatedly with real names seems not helpful to the community and indicates a questionable motivation.
by bluenose69 on 4/22/21, 9:57 AM
The benefit is twofold: (a) it's simpler to block a whole university than it is to figure out who the individuals are and (b) this sends a message that there is some responsibility at the institutional level.
The risk is that someone writing from that university address might have something that would be useful to the software.
Getting patches and pull requests accepted is not guaranteed. And it's asking a lot of kernel developers that they check not just for bad code but also for badly-intended code.
I had a look at the research paper (https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...) and it saddens me to see such a thing coming out of a university. It's like a medical researcher introducing a disease to see whether it spreads quickly.
by francoisp on 4/21/21, 3:22 PM
I fail to see how this does not amount to vandalism of public property. https://www.shouselaw.com/ca/defense/penal-code/594/
by im3w1l on 4/21/21, 2:42 PM
by maccard on 4/21/21, 11:25 AM
by rurban on 4/21/21, 4:52 PM
https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
by djhaskin987 on 4/21/21, 7:29 PM
> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits
> Qiushi Wu, and Kangjie Lu.
> To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.
> Note: The experiment did not introduce any bug or bug-introducing commit into OSS. It demonstrated weaknesses in the patching process in a safe way. No user was affected, and IRB exempt was issued. The experiment actually fixed three real bugs. Please see the clarifications[2].
1: https://www-users.cs.umn.edu/~kjlu/
2: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
by darau1 on 4/21/21, 12:29 PM
by CountSessine on 4/21/21, 6:05 PM
by kerng on 4/21/21, 6:32 PM
by beshrkayali on 4/21/21, 2:42 PM
by johncessna on 4/21/21, 6:07 PM
Once they clean out the garbage in the Comp Sci department and their research committee that approved this experiment, we can talk.
by mycologos on 4/21/21, 5:03 PM
However, zooming out a little, I think it's kind of useful to look at this as an example of the incentives at play for a regulatory bureaucracy. Comments bemoaning such bureaucracies are pretty common on HN (myself included!), with specific examples ranging from the huge timescale of public works construction in American cities to the FDA's slow approval of COVID vaccines. A common request is: can't these regulators be a little less conservative?
Well, this story is an example of why said regulators might avoid that -- one mistake here, and there are multiple people in this thread promising to email the UMN IRB and give them a piece of their mind. One mistake! And when one mistake gets punished with public opprobrium, it seems very rational to become conservative and reject anything close to borderline to avoid another mistake. And then we end up with the cautious bureaucracies that we like to complain about.
Now, in a nicer world, maybe those emails complaining to the IRB would be considered valid feedback for the people working there, but unfortunately it seems plausible that it's the kind of job where the only good feedback is no feedback.
by Fordec on 4/21/21, 8:05 PM
This news makes me wish to implement my own block on the same contributors to any open source I'm involved with. At the end of the day, their ethics is their ethics. Those ethics are not Linux specific, it was just the high profile target in this instance. I would totally subscribe to or link to a group sourced file similar to a README.md or CONTRIBUTORS.md (CODERS_NON_GRATA.md?) that pulled such things.
by rurban on 4/21/21, 4:40 PM
At least both of them are free from such @umn.edu commits with fantasy names.
by Radle on 4/21/21, 2:46 PM
These patches look like bombs under bridges to me.
Do you believe that some open source projects should have legal protection against such actors? The Linux Kernel is pretty much a piece of infrastructure that keeps the internet going.
by mikaeluman on 4/21/21, 6:28 PM
In addition to wasting people's time, you are potentially messing with software that runs the world.
by fennecs on 4/22/21, 4:21 AM
Apart from perhaps some critical unsafe stuff, which should get a lot of attention, requiring everything to be safe/verified to some extent surely is the answer.
by largehotcoffee on 4/21/21, 5:39 PM
by ficiek on 4/21/21, 12:55 PM
by gjvc on 4/21/21, 7:51 PM
by wolverine876 on 4/21/21, 6:21 PM
Usually people are admired here for finding vulnerabilities in all sorts of systems and processes. For example, when someone submits a false paper to a peer-reviewed journal, people around here root for them; I don't see complaints about wasting the time and violating the trust of the journal.
But should one of our beloved institutions be tested - now it's an outrage?
by nullc on 4/22/21, 6:40 AM
But it means that 'security' research regularly does ethically questionable stuff.
IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable) the university practices will not substantially change.
by Luker88 on 4/21/21, 3:36 PM
https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
It has yet to be published (due next month)
How about opening a few bug reports to correctly report the final response of the community and the actual impact?
Not asking to harass them: if anyone should do it, it would be the kernel devs, and I'm not one of them
by fellellor on 4/21/21, 4:56 PM
by GRBurst on 4/21/21, 10:24 PM
The way the university did these tests, and the reactions afterwards, are just bad.
What I see here, and what the Uni of Minnesota seems to have neglected, is: 1. financial damage (time is wasted), and 2. the ethics of experimenting on human beings.
As a result, the university should give a clear statement on both and donate a generous amount of money as compensation for (1).
For (2), a simple but honest apology can do wonders!
---
Having said that, I think there are other, ethically better ways to achieve these measurements.
by AshamedCaptain on 4/21/21, 5:31 PM
Researcher sends bogus patches to a bazaar-style project, gets them reviewed and approved, uses that to point out how ridiculous the project's review process is => DON'T DO THAT! BAD RESEARCHER, BAD!
by jokoon on 4/22/21, 1:43 PM
I'm repeating myself, but I'm pretty certain the NSA or other intel agencies (Israel, especially, considering their netsec expertise) have already done it in one way or another.
Do you remember the semicolon that caused a big wifi vuln? Hard to really know if it was just a mistake.
I'm going full paranoiac here, but anyway.
You can also imagine the NSA submitting patches to the Windows source code without the knowledge of Microsoft, and so many other similar scenarios (Android, Apple, etc.)
by mabbo on 4/21/21, 3:12 PM
Imagine what happens 25 years from now as some ground-breaking security research is being done at Minnesota, and they all groan: "Right, shoot, back in 2021 some dumb prof got us banned forever from submitting patches".
Is there a mechanism for the University of Minnesota to appeal, someday? Even murderers have parole hearings, eventually.
by theflyinghorse on 4/21/21, 3:12 PM
Incredible that the university researchers decided this was a good idea. Has no one in the university voiced concern that perhaps this is a bad idea?
by psim1 on 4/21/21, 3:42 PM
by ddingus on 4/21/21, 4:07 PM
Aaaaand into the kill file they go.
Been a while since I last saw a proper plonk.
by kevinventullo on 4/21/21, 4:26 PM
by charonn0 on 4/21/21, 3:41 PM
by davidkuhta on 4/21/21, 3:26 PM
by LanceH on 4/21/21, 12:53 PM
by bigbillheck on 4/21/21, 12:31 PM
by skerit on 4/22/21, 8:58 AM
by alkonaut on 4/21/21, 8:13 PM
Whether some unknown contributor can submit a bad patch isn't so interesting for this type of project. Knowing the payouts for exploits, the question is: how much money would one bad reviewer want to let one past?
by kemonocode on 4/21/21, 5:13 PM
So I hear tinfoil is on sale, mayhaps I should stock up.
by GNOMES on 4/21/21, 3:46 PM
by qwertox on 4/21/21, 4:10 PM
I mean, the Kernel is now starting to run in cars and even on Mars, and getting those bugs into stable is definitely no achievement one should be proud of.
by fefe23 on 4/21/21, 3:13 PM
Sure, we infected you with syphilis without asking for permission first, but we did it for science!
by emeraldd on 4/21/21, 10:47 PM
by rwoerz on 4/22/21, 6:01 AM
by mryalamanchi on 4/21/21, 12:52 PM
by hzzhang on 4/21/21, 6:18 PM
by thayne on 4/21/21, 5:52 PM
by mosselman on 4/21/21, 2:56 PM
This is obviously the complete opposite of how you should be communicating with someone in most situations let alone when you want something from them.
I have sure been there though so if anything, take this as a book recommendation for 'How to Win Friends & Influence People'.
by spinny on 4/21/21, 9:08 PM
by shiyoon on 4/21/21, 4:09 PM
Seemed to have posted some clarifications around this. worth a read
by matheusmoreira on 4/21/21, 4:43 PM
by yosito on 4/21/21, 11:19 AM
by honeybutt on 4/21/21, 5:55 PM
by kml on 4/21/21, 4:55 PM
by stakkur on 4/21/21, 3:19 PM
by cmclaughlin on 4/23/21, 4:12 AM
by booleandilemma on 4/21/21, 3:02 PM
I guess it's not as feasible as they thought.
by satai on 4/22/21, 7:33 AM
I think there should be a real world experiment to test it.
by dghlsakjg on 4/21/21, 3:47 PM
https://integrity.umn.edu/ethics
Feel free to write to them
by dumbDev on 4/21/21, 5:08 PM
by jvanderbot on 4/21/21, 7:55 PM
by jc2it on 4/29/21, 3:43 PM
by amarant on 4/22/21, 6:16 AM
New white paper due soon
by omar12 on 4/21/21, 4:57 PM
by ineedasername on 4/22/21, 12:05 AM
I don't think this was the pitch they gave to their IRB.
by dboreham on 4/21/21, 3:55 PM
by jbirer on 4/22/21, 12:55 AM
by MR4D on 4/21/21, 4:14 PM
Greg’s response is totally right.
by redmattred on 4/21/21, 6:00 PM
by devillius on 4/21/21, 12:30 PM
by wglb on 4/21/21, 4:55 PM
For supply-chain attacks, there isn't an easy answer. Classical methods left both the SolarWinds and Codecov attacks in place for far too many days.
by dumpsterdiver on 4/21/21, 5:39 PM
by autoconfig on 4/21/21, 12:44 PM
by BTCOG on 4/21/21, 4:09 PM
by xmly on 4/26/21, 9:28 PM
by soheil on 4/22/21, 12:30 AM
by dawnbreez on 4/21/21, 4:54 PM
yes, this is pentesting
by uglygoblin on 4/21/21, 1:47 PM
by liendolucas on 4/21/21, 1:36 PM
by francoisp on 4/21/21, 2:51 PM
by duerra on 4/21/21, 12:09 PM
by beprogrammed on 4/22/21, 4:19 AM
by kome on 4/21/21, 11:43 AM
by HelloNurse on 4/21/21, 3:24 PM
by freewilly1040 on 4/21/21, 7:56 PM
by limaoscarjuliet on 4/21/21, 6:09 PM
by soheil on 4/22/21, 12:24 AM
by lfc07 on 4/21/21, 12:24 PM
by moron4hire on 4/21/21, 11:56 AM
by davidkuhta on 4/21/21, 7:00 PM
by CTDOCodebases on 4/21/21, 2:57 PM
by 8bitsrule on 4/22/21, 1:16 AM
by coward76 on 4/21/21, 2:40 PM
by soheil on 4/21/21, 4:55 PM
by pertymcpert on 4/21/21, 9:18 PM
by nitinreddy88 on 4/21/21, 2:30 PM
by enz on 4/21/21, 3:10 PM
by Apofis on 4/21/21, 3:23 PM
by birdyrooster on 4/23/21, 7:46 PM
by shiyoon on 4/21/21, 4:09 PM
posted some clarifications around this, worth a read
by ilamont on 4/21/21, 5:28 PM
Here's how the professor (a sociologist) described his methodology:
These three sets of behaviors – rigidly competitive pvp tactics (e. g., droning), steadfastly uncooperative social play outside the game context (e. g., refusing to cooperate with zone farmers), and steadfastly uncooperative social play within the game context (e. g., playing solo and refusing team invitations) – marked Twixt’s play from the play of all others within RV.
Translation: He killed other players in situations that were allowed by the game's creators but frowned upon by the majority of real-life participants. For instance, "villains" and "heroes" aren't supposed to fraternize, but they do anyway. When "Twixt" happened upon these and other situations -- such as players building points by taking on easy missions against computer-generated enemies -- he would ruin them, often by "teleporting" players into unwinnable killzones. The other players would either die or have their social relations disrupted. Further, "Twixt" would rub it in by posting messages like:
Yay, heroes. Go good team. Vills lose again.
The reaction to the experiment and to the paper was what you would expect. The author later said it wasn't an experiment in the academic sense, claiming:
... this study is not really an experiment. I label it as a “breaching experiment” in reference to analogous methods of Garfinkel, but, in fact, neither his nor my methods are experimental in any truly scientific sense. This should be obvious in that experimental methods require some sort of control group and there was none in this case. Likewise, experimental methods are characterized by the manipulation of a treatment variable and, likewise, there was none in this case.
Links:
http://www.nola.com/news/index.ssf/2009/07/loyola_university...
https://www.ilamont.com/2009/07/academic-gets-rise-from-brea...
by sadfev on 4/21/21, 8:23 PM
by werber on 4/21/21, 10:07 PM
by metalliqaz on 4/21/21, 12:27 PM
by ne38 on 4/21/21, 8:37 PM
by iou on 4/21/21, 10:42 PM
by shadowgovt on 4/21/21, 4:26 PM
by LegitShady on 4/21/21, 2:32 PM
by balozi on 4/21/21, 4:02 PM
by TacticalCoder on 4/21/21, 12:18 PM
by gumby on 4/21/21, 3:34 PM
by crazypython on 4/21/21, 12:30 PM
by francoisp on 4/21/21, 8:02 PM
by readme on 4/21/21, 11:42 AM
by nabla9 on 4/21/21, 1:16 PM
1) send ethics complaint to the University of Minnesota, and
2) report this to FBI cyber crime division.
by jcun4128 on 4/21/21, 5:19 PM
by foolfoolz on 4/21/21, 2:44 PM
by brundolf on 4/21/21, 9:29 PM
by a-dub on 4/21/21, 9:18 PM
it's good work and i'm glad they've done it, but that's depressing.
now what?
by devpbrilius on 4/21/21, 3:41 PM
by dt123 on 4/22/21, 11:07 AM
by ElectricMind on 4/22/21, 10:49 AM
by arua442 on 4/21/21, 1:47 PM
by treesknees on 4/21/21, 2:06 PM
Unfortunately even if the latest submissions were sent with good intentions and have nothing to do with the bug research, the University has certainly lost the trust of the kernel maintainers.
by WrtCdEvrydy on 4/21/21, 2:21 PM
I back your decision, and fuck these people. I will additionally be sending a strongly worded email to this person, their advisor, and whoever's in charge of this joke of a computer science school. Sometimes I wish we had an ABA equivalent for computer science.
by TedShiller on 4/21/21, 9:09 PM
by mort96 on 4/21/21, 2:46 PM
by atleta on 4/21/21, 4:10 PM
by donatj on 4/21/21, 2:09 PM
by WrtCdEvrydy on 4/21/21, 2:22 PM
Do you understand how dumb that sounds?
by kingsuper20 on 4/21/21, 1:16 PM
Given the size and complexity of the Linux (/GNU) codeworld, I have to wonder if they are coming up against (or already did) the practical limits of assuring safety and quality using the current model of development.
by PHDchump on 4/21/21, 4:38 PM
by b0rsuk on 4/21/21, 3:34 PM
Someone should look into who sponsored this research. Was there a state actor?
by calylex on 4/22/21, 12:39 AM
by jtdev on 4/21/21, 6:18 PM
https://experts.umn.edu/en/organisations/confucius-institute
by knz_ on 4/21/21, 3:36 PM
They were almost certainly expecting the obviously bad patch to be rejected while trying to sneak a less obvious one by.
by mnouquet on 4/21/21, 5:22 PM
by unanswered on 4/21/21, 3:43 PM
Maybe I'm just too cynical and paranoid though.
by unanswered on 4/21/21, 3:31 PM
by shadowgovt on 4/21/21, 3:09 PM
by andi999 on 4/21/21, 3:09 PM
by Quarrelsome on 4/21/21, 10:55 AM
by incrudible on 4/21/21, 2:07 PM
The system appears to have worked, so that's good news for Linux. On the other hand, now that the university has been banned, they won't be able to find holes in the process that may remain, that's bad news for Linux.
by mfringel on 4/21/21, 3:18 PM
When a university submits intentionally buggy patches to the Linux Kernel, and the maintainers successfully detect it, the community responds with "That was an incredibly scummy thing to do."
I sense a teachable moment, here.
by InsomniacL on 4/21/21, 11:00 AM
If this was Facebook and their response was:
> ~"stop wasting our time"
> ~"we'll report you"
the responses here would be very different.
by returningfory2 on 4/21/21, 2:59 PM
If you look at the website of the PhD student involved [1], they seem to be writing mostly legitimate papers about, for example, using static analysis to find bugs. In this kind of research, having a good reputation in the kernel community is probably pretty valuable because it allows you to develop and apply research to the kernel and get some publications/publicity out of that.
But now, by participating in this separate unethical research about OSS process, they've damaged their professional reputation and probably set back their career somewhat. In this interpretation, their other changes were made in good faith, but have now been tainted by the controversial paper.
by tester34 on 4/21/21, 11:19 AM
HN: let's hate researcher(s) instead of process
Wow.
Assume good faith, I guess?
by duxup on 4/21/21, 2:49 PM
Universities are places with lots of different students, professors, and different people with different ideas, and inevitably people who make bad choices.
Universities don't often act with a single purpose or intent. That's what makes them interesting. Prone to failure and bad ideas, but also new ideas that you can't do at corporate HQ because you've got a CEO breathing down your neck.
At the University of Minnesota there's 50k+ students at the Twin Cities campus alone, 3k plus instructors. Even more at other University of Minnesota campuses.
None of those people did anything wrong. Putting the onus on them to effect change to me seems unfair. The people banned didn't do anything wrong.
Now the kernel doesn't 'need' any of their contributions, but I think this is a bad method / standard to set to penalize / discourage everyone under an umbrella when they've taken no bad actions themselves.
Although I can't put my finger on why, this ban on whole swaths of people in some ways seems very not open source.
The folks who did the thing were wrong to do so, but the vast majority of people now impacted by this ban didn't do the thing.
by perfunctory on 4/21/21, 11:22 AM
by ilammy on 4/21/21, 11:34 AM
by noxer on 4/21/21, 11:28 AM
The wasted-time argument is nonsense too. It's not like they did this thousands of times, and besides, reviewing intentionally bad code is not wasted time; it is just as productive as reviewing "good" code, and together with the follow-up patch it should be even more valuable work. It not only adds a patch, it also makes the reviewer better.
Yeah, it ain't fun when people trick you or point out that you did not succeed in what you tried to do. But instead of playing the victim and playing the "unethical human experiment" card, maybe focus on improving.