by epoch_100 on 8/28/23, 5:46 PM with 321 comments
by tptacek on 8/28/23, 6:35 PM
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
by jbombadil on 8/28/23, 6:12 PM
I've seen examples of employment contracts with clauses like "if any piece of this contract is invalid, it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they face no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader (who tends to be the much weaker party) to know.
Same here. Why can a company send a threatening letter ("you'll go to federal prison for 20 years for this!!") when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequence beyond "oh, never mind"?
by f0e4c2f7 on 8/28/23, 6:14 PM
Sure, you still get some of that today, from an especially old-fashioned company or, in this case, naive college students, but overall things have shifted quite dramatically in favor of disclosure: dedicated middlemen who protect security researchers' identities, large enterprises encouraging and celebrating disclosure, six-figure bug bounties; even the laws themselves have changed to be friendlier to security researchers.
I'm sure it was quite unpleasant for the author to go through this, but it's a nice reminder that situations like this are now somewhat rare, whereas they used to be the norm (or worse).
by mewse-hn on 8/28/23, 6:09 PM
https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
by icameron on 8/28/23, 6:16 PM
That's wild!
by hitekker on 8/28/23, 6:27 PM
by seiferteric on 8/28/23, 6:08 PM
by SenAnder on 8/28/23, 6:08 PM
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure" while knowing you are not, and that your users have no protection against spying by you or by any minimally competent hacker, is fraud at minimum, and closer to criminal wiretapping, since you're knowingly tricking your users into revealing their secrets on your service in the belief that it is "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
by pie_R_sqrd on 8/28/23, 7:10 PM
by monksy on 8/28/23, 6:40 PM
Fantastic for calling Fizz out: "Fizz did not protect their users’ data. What happened next?" This isn't a case of "someone hacked them"; it's that Fizz failed to do what they promised.
I'm still curious to hear whether the vulnerability has been retested to confirm it's actually been resolved.
by davesque on 8/28/23, 11:11 PM
I don't think this applies to the reporter in this case, but it does seem like there's a bit of a trend in security research lately to capitalize on the publicity of finding a vulnerability for one's own personal branding. That feels a bit disingenuous. Not that the appropriate response would be to threaten someone with legal action.
by ryandrake on 8/28/23, 6:09 PM
It's practically a given that the actual security (or privacy) of a piece of software is inversely proportional to its claimed security and how loudly those claims are made. Also, the companies that pay the least attention to security are always the ones who later, after the breach, say "We take security very seriously..."
by simonw on 8/28/23, 6:18 PM
by lxe on 8/28/23, 6:32 PM
by dfxm12 on 8/28/23, 7:06 PM
by consoomer on 8/28/23, 7:26 PM
In all honesty, nothing good usually comes from that. If they wanted the truth to be exposed, they would have been better off exposing it anonymously to the company and/or the public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
by nickdothutton on 8/28/23, 8:24 PM
“Keep calm” and “be responsible” and “speak to a lawyer” are things I class as common sense. The gold nugget I was looking for was the red flashing shipwreck buoy/marker over the names.
by hermannj314 on 8/28/23, 6:35 PM
Am I to understand you can attempt to hack any computer to gain unauthorized access without prior approval? That doesn't seem legal at all.
Whether or not there was a vulnerability, was the action taken actually legal under current law? I don't see anything indicating for or against in the article. Just posturing that "ethical hacking" is good and saying you are secure when you aren't is bad. None of that seems relevant to the actual question of what the law says.
by 1970-01-01 on 8/28/23, 7:48 PM
by utopcell on 8/28/23, 10:48 PM
Kudos to Cooper, Miles and Aditya for seeing this through.
by SoftTalker on 8/28/23, 6:19 PM
They could threaten to report you to the police or such authorities, but they would have to turn over their evidence to them and to you and open all their relevant records to you via discovery.
> Get a lawyer
Yes, if they're seriously threatening legal action they already have one.
by lightedman on 8/28/23, 11:55 PM
That would've been a better legal threat to put on them as an offensive move, instead of using the EFF: "Sure, you can attempt to have me jailed, but your threat is clear-cut felony extortion. See you in the jail cell right there with me!"
by sublinear on 8/28/23, 6:12 PM
Maybe it's because I'm getting old, but it would never cross my mind to take any of this personally.
If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats, it's pretty clear they have no idea what they're doing.
Being the "good guy" can sometimes be harder than being the "bad guy", but suppressing your emotions is a basic requirement for being either "guy".
by michaelmrose on 8/29/23, 4:33 AM
This is wholly and obviously illegal, but so is the described ethical hacking. You have adopted a complex, nuanced strategy to minimize harm to all parties. That is great morally, but as far as I can tell it's only meaningful legally insofar as it makes folks less likely to go after you; nothing about it makes your obviously illegal actions legal. So if you are going to openly flout the law, it makes sense to put less of a target on your back while you are breaking it.
by tamimio on 8/28/23, 11:46 PM
by xeromal on 8/28/23, 7:19 PM
by datacruncher01 on 8/28/23, 7:26 PM
When there isn't an already established process, payouts for finding bugs either won't be worth your time or the report will be seen as malicious activity.
by causality0 on 8/28/23, 8:21 PM
by jccalhoun on 8/28/23, 6:53 PM
by JakeAl on 8/28/23, 6:14 PM
by withinrafael on 8/28/23, 8:10 PM
by noam_compsci on 8/29/23, 12:57 PM
by winter_blue on 8/28/23, 6:53 PM
The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of the confidential user data, (2) delete all data on the server, and (3) publish the confidential user data on some dumping site, protecting their anonymity while doing all three.
If these researchers had done (2) and (3), and done so anonymously, that would have not only protected them from legal threats/harm, but also effectively killed off a company that shouldn't exist, since all of Buzz/Fizz's users would likely abandon it as a consequence.
by wedn3sday on 8/28/23, 6:56 PM
by pityJuke on 8/28/23, 7:56 PM
[0]: https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
by helaoban on 8/28/23, 6:40 PM
by c4mpute on 8/28/23, 10:25 PM
Nothing else is ethically viable. Nothing else protects the researcher.
by Buttons840 on 8/28/23, 7:21 PM
This during a time when thousands or millions have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.
Researchers who do the right thing face legal threats of 20 years in prison. Companies who cut corners on security face no consequences. This seems backwards.
Remember when a journalist pressed F12 and saw that a Missouri state website was exposing the personal data of every teacher in the state (including SSNs)? He reported the security flaw responsibly, and because it was embarrassing to the State, the Governor attacked him and legally harassed him. https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
I once saw something similar: a government website exposing the personal data of licensed medical professionals. A REST API responded with all their personal data (including SSN, address, etc.), but the HTML frontend wouldn't display it. All the data was just an unauthenticated REST call away, for thousands of people in the state. What did I do? I just closed the tab and never touched the site again. It wasn't worth the personal risk to try to do the right thing, so I just ignored it, and for all I know all those people had their data stolen multiple times over because of this security flaw. I found the flaw as part of my job at the time; I don't remember the details anymore. It has probably been fixed by now. Our legal system made it a huge personal risk to do the right thing, so I didn't do the right thing.
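Roughly what that anti-pattern looks like, with the endpoint, fields, and data all invented for illustration (the frontend renders a subset, while the unauthenticated API returns the whole record):

    # Purely illustrative sketch of the over-fetching pattern described above;
    # this is not the actual site or schema.
    import requests

    # The HTML page only rendered a name and license number...
    resp = requests.get("https://example-license-board.example/api/licensees/12345")
    record = resp.json()

    # ...but the unauthenticated JSON response carried everything:
    print(record["name"])          # shown in the UI
    print(record["ssn"])           # never displayed, but returned anyway
    print(record["home_address"])  # same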
Which brings me to my point. We need strong protections for those who expose security flaws in good faith. Even if someone is a grey hat and has done questionable things as part of their "research", as long as they report their security findings responsibly, they should be protected.
Why have we prioritized making things nice and convenient for the companies over all else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security researcher pokes an app and finds a flaw, and they weren't authorized to do so, OMG!, that person needs to go to jail for decades, how dare they press F12!!!1
This is a national security issue. While we continue to see the same stories of massive breaches in the news over and over and over, and some of us get yet another free year of monitoring that credit agencies don't commit libel against us, just remember that we put the convenience of companies above all else. They get to opt-in to having their security tested, and over and over they fail us.
Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
by kordlessagain on 8/29/23, 12:12 AM
by asynchronous on 8/28/23, 8:40 PM
How do devs forget this step before raising 4.5 million in seed funding?
by aa_is_op on 8/28/23, 8:23 PM
by wang_li on 8/28/23, 6:10 PM