by phillmv on 11/1/16, 2:13 PM with 256 comments
by Animats on 11/1/16, 5:05 PM
There's a problem at the China end with crap low-end devices driving out the good ones. Here's a good example: solid state relays, useful little devices for safely switching AC power with a logic level signal. Look at this Fotek solid state relay on Amazon.[1] That's a counterfeit. Fake manufacturer name. Fake UL and CE marks. Here's UL's warning notice on counterfeit Fotek solid state relays, and how to recognize fakes.[2] There are lots of unhappy customers; the fake ones have been reported to overheat, melt, or stick in the ON condition. Every Fotek relay on Amazon that I can find is fake.
The fakes are real solid state relays with grossly exaggerated power ratings. For real ones, cost goes up with power. The fakes all cost about the same regardless of nameplate power rating. Here's an especially bad one: a "100 amp" version.[3] The real Fotek, in Taiwan, doesn't even make a 100 amp version in that form factor - the terminals aren't big enough for 100 amps.
The result is that nobody is selling legit solid state relays on Amazon. They exist; you can buy them through Digi-Key or Mouser. They cost about 2.5x the fake price. But Amazon has been totally conned. (The ones on eBay are fake, too.) Worse, if you're a legit solid state relay maker in China, you have a hard time selling. The counterfeits have pushed the price down too far.
Back to hoverboards. There are now UL-approved hoverboards. They don't catch fire. Heavy pressure on China suppliers worked. That needs to happen with insecure IoT devices.
[1] https://www.amazon.com/Frentaly-24V-380V-Solidstate-Arduino-... [2] http://www.ul.com/newsroom/publicnotices/ul-warns-of-solid-s... [3] https://www.amazon.com/Industrial-FOTEK-Protective-SSR-100DA...
by Analemma_ on 11/1/16, 2:57 PM
You could have no security and just get lucky and never get hacked. Or you could have great security and just get really unlucky and have a determined hacker. Or you could be spending uselessly and still getting lucky, although you (and your vendor!) attribute your good fortune to the product. This kind of information failure makes it really hard to have a functional and efficient market, even when everyone involved is honest.
I don't have a good solution for this, which is why I hope someone smarter than me brings it up.
by herghost on 11/1/16, 3:22 PM
Normal folk want to consume new gadgets because that's the culture we have. So it's a race to put new gadgets with new features in front of people. Sure, as a customer I could insist on my manufacturer having taken security seriously and having their products thoroughly tested and reviewed and hardened and patchable and all that good stuff, but then I'm going to have to pay more money for my gadget than my buddy here who just wants to be able to flush his toilet from his smartphone.
There is literally no consequence for manufacturers of poor-quality products when the impact doesn't fall directly on their own customers, and so there's no market force that is going to address this.
When viewed this way, it's a classic case where we need government/legislative involvement.
by djrogers on 11/1/16, 3:56 PM
Don't get me wrong, there are tons of ways in which the security industry fails (the biggest IMHO is buying/selling things that only get implemented in a half-@$$ed manner or not at all), but this is like blaming the airline industry for a train wreck.
Perhaps the real problem is that for home users there really is no security industry to speak of? A handful of features on WiFi APs that get turned off if they break your Xbox games, and maybe some desktop AV. That's pretty much it - and I'm not sure we can ever expect much more.
by iregistered4 on 11/1/16, 2:51 PM
If anything, this is proof that the security industry does work: these attacks are happening on devices where there is no security budget, not on servers with large investments in security.
by achr2 on 11/1/16, 2:50 PM
by skoussa on 11/1/16, 5:46 PM
The root causes are the following:
1- Security is, more often than not, an afterthought. When you are trying to get to market under tight deadlines, burning the midnight oil, nobody has the time, energy or money to think about security.
2- The lack of security education among most of the stakeholders (upper management, product managers, engineers, etc.) does not help and keeps security a taboo. In most organizations, nobody has the title of making the software secure, so it falls into nobody's lap.
3- While I have all the respect for the profession of honest sales, some salesmen ruined it for all of us, feasting on the lack of education mentioned above and selling tools/services as the silver bullet for the security problem, an idea that is very well received by someone who does not understand the problem and really is looking for a silver bullet.
4- At the end of the day, the real issue is that security is a cost center; there is no ROI for the business in doing security other than avoiding problems that "could" happen in the future.

That being said, there are three classes of clients I have seen doing security:
1- Heavy losses: for banks, for example, the risk of losing money is quite real and tangible. Besides, they are (at least in the U.S.) under heavy regulation to do so. But their real motivation is risk mitigation.
2- Regulations (the worst reason to do security): such as the PCI industry, where they have to do security checks to avoid fines. This category usually tries to do the minimum to get by.
3- Proactiveness: hats off to this category, as they don't really have to do it; they just think it is something that must be done.

Solutions:
1- More education
2- More education
3- More education
4- Implement more security controls natively into frameworks (output encoding, entity frameworks, etc.) and browsers (CSP, etc.)
5- More fines for companies that don't take the minimum steps to ensure data confidentiality and integrity.
by chubot on 11/1/16, 6:38 PM
He seems immature and vain, because his motive is apparently to taunt someone with how smart he is, but the code is indeed pretty awesome and educational. It's a little sad that commercial software is so ugly and that black hat software is elegant (though I guess it has to be, because it's under rather severe "environmental pressures").
https://github.com/jgamblin/Mirai-Source-Code/blob/master/Fo...
At first, I was also kinda shocked that it had this simplistic list of hard-coded user names and passwords (mentioned in the article). But I guess I've worked in the software industry long enough that it makes sense. Computers are so ubiquitous and on reflection it's not a surprise that you can pull down hundreds of thousands of machines with this technique!!!
Can anyone shed light on the economics of releasing source code? I would think this would make your botnet much less valuable. Apparently someone found a vulnerability in his HTTP parser, which I don't think would have happened without the source code.
So did the author shoot himself in the foot for reasons of pride, or is there something else going on?
https://github.com/jgamblin/Mirai-Source-Code/blob/master/mi...
// Set up passwords
add_auth_entry("\x50\x4D\x4D\x56", "\x5A\x41\x11\x17\x13\x13", 10); // root xc3511
add_auth_entry("\x50\x4D\x4D\x56", "\x54\x4B\x58\x5A\x54", 9); // root vizxv
...
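For what it's worth, the obfuscation on those strings is trivial. A minimal decoder sketch, written from the two entries shown above rather than lifted from the Mirai source; for these entries the encoding reduces to XOR with the single byte 0x22 (0x50 ^ 0x22 == 'r', and so on):

    /* decode the obfuscated credential strings from the snippet above */
    #include <stdio.h>
    #include <string.h>

    static void deobf(const char *in, char *out)
    {
        size_t i, n = strlen(in);
        for (i = 0; i < n; i++)
            out[i] = in[i] ^ 0x22;   /* single-byte XOR */
        out[n] = '\0';
    }

    int main(void)
    {
        char user[32], pass[32];
        deobf("\x50\x4D\x4D\x56", user);          /* -> "root"   */
        deobf("\x5A\x41\x11\x17\x13\x13", pass);  /* -> "xc3511" */
        printf("%s / %s\n", user, pass);
        return 0;
    }

Running it prints "root / xc3511", matching the comments in the source.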
by tptacek on 11/1/16, 3:17 PM
It's true, the 1U rackmount netsec industry does virtually nothing to prevent consumer electronics vendors from shipping terribly insecure code. I don't like the netsec industry either. But: so what?
The reality is, very few companies are buying 1U rackmount snake oil (or Nth generation antivirus products like endpoint protection tools) to stop things like Mirai. We're not even talking about the same budget. The "security industry" is not in fact chartered with stopping things like Mirai. So Mirai is a weird complaint to level at it.
by fulafel on 11/1/16, 2:54 PM
by TACIXAT on 11/1/16, 2:54 PM
Hard-coded creds and the allowance of default creds aren't the security industry's doing; they're the manufacturer's.
by mrob on 11/1/16, 11:16 PM
This will result in harm to third parties who did not act maliciously, but that's already happening now. With this change in law, the total harm will probably be less because the problem will be solved for real, which will dramatically reduce or eliminate the possibility of "black swan" events causing very serious harm (e.g. the shutdown of critical infrastructure).
by jknoepfler on 11/1/16, 4:04 PM
by evilDagmar on 11/1/16, 11:57 PM
The blame for this debacle falls squarely on the heads of the vendors who produced these trusting (if not downright gullible) devices in the first place.
by zeveb on 11/1/16, 2:46 PM
That, right there, is a damning indictment not only of our industry but also of our culture. We know how to secure systems. It's not magic. But — unlike for example physical hygiene — we haven't made the decision to make computer hygiene part of our culture. We look down on people who don't wash their hands, but we don't look down on people who use poor passwords. We teach children to cover their mouths when they cough, but we don't teach children not to plug a Windows machine into a network.
by ryanlol on 11/1/16, 2:56 PM
Mirai doesn't have shit to do with the security industry. The security industry is the people you hire to secure your things; the victims of Mirai did not take advantage of the services the security industry provides.
More like, The Mirai Botnet Is Proof the Security Industry Is Going To Be Doing Fucking Great
by mjevans on 11/1/16, 5:11 PM
Of course we know how to write secure code: code that meets a rigorous, well-engineered design and eliminates invalid outcomes as a result. The problem is that such code is slow and expensive to produce.
Good, Fast, Cheap; pick (at most) two. Security cameras optimize for Cheap first and Fast second, so of course we see issues like this.
by delecti on 11/1/16, 3:12 PM
by zby on 11/1/16, 3:01 PM
by MR4D on 11/1/16, 2:55 PM
That would be a start.
Second, any computerized device must pass FTC/FCC/UL (pick one) tests for computer security before going on sale.
There's more that can be done, but let's go after the simple stuff first.
by skywhopper on 11/1/16, 10:31 PM
The only way out of this mess is regulation of what types of devices can be sold and how they must be secured. The electronics industry and online retailers need to get together and figure this out and come up with a UL for IoT, or the government will step in and make them all a lot more unhappy.
by _audakel on 11/1/16, 3:51 PM
by peterwwillis on 11/1/16, 3:05 PM
The security industry has absolutely nothing to do with the existence of a botnet that can take down massive internet infrastructure. The security industry just puts bandaids on shitty products. It's the internet architects/designers that are responsible for botnets.
In order to make the internet very simple, very compatible, and decentralized and distributed, the design allows a baby monitor to send arbitrary traffic to any device on the global network. There is no good reason for this. The reason is, anything else would be complicated, and complicated things become expensive and troublesome. But that's not a good reason to allow baby monitors to take down internet services.
The solution would be to segregate critical equipment by address and protocol according to function, and to put strict controls in all routers to prevent illegitimate traffic from reaching the wrong equipment. This would not only improve security, it would make allocation of address space and application ports make some kind of practical sense, and allow for improvements in the way applications communicate over the internet, to say nothing of improved traffic management.
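As a rough illustration of the smallest version of that idea (my own sketch, not something the current architecture provides; "iot0" and the update-server address are placeholders): on a Linux-based router you can already pin an IoT segment's egress with a few iptables rules, so a compromised camera simply cannot reach arbitrary hosts:

    # IoT devices sit on their own interface/VLAN, here called iot0 (hypothetical)
    # allow them to reach only their vendor's update endpoint over HTTPS
    iptables -A FORWARD -i iot0 -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
    # allow replies on connections that were initiated from elsewhere
    iptables -A FORWARD -i iot0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # and drop everything else the segment tries to originate
    iptables -A FORWARD -i iot0 -j DROP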
But nobody's going to change the design, so whatever.
by raesene9 on 11/1/16, 3:21 PM
The simple fact is that there are very limited economic incentives for a company in the IoT space to spend money on security, and as a result they don't.
It's not easy for an ordinary consumer to differentiate between a company that just says "security is our top priority" and one that puts meaningful effort behind that (i.e. there is a strong market for lemons here).
Also there's no effective regulation which could substitute for that information. In other markets (property, consumer goods, food and drink) we have safety regulations as it was recognised that consumers can't effectively differentiate. In IoT and other areas of IT this doesn't exist, so there's nothing to stop insecure devices being sold.
As to the "security industry" well there have been enough practitioners warning about this, to limited effect. Realistically there's a limited amount that can be done without some form of top-down intervention.
by rdiddly on 11/1/16, 7:25 PM
by Silhouette on 11/1/16, 3:15 PM
Obviously defence in depth and dedicated security tools have their place in a networked environment, but you can't just outsource the problem or fix it with some bolted on extra.
Some concerns simply have to be addressed as an integral part of whatever software or device is being made. If we don't do that, well, we've just seen the result.
by moron4hire on 11/1/16, 6:45 PM
Unlike vehicle registration, it wouldn't require you to do anything other than keep your system maintained. If you want to put your computer on the internet, be prepared to get port-scanned by the US Digital Service once a year/month/week/whatever, attempting to take your computer off the 'net. If it succeeds, then that's one machine that could have been--but now won't be--part of a botnet.
ChaosMonkey as a public works project.
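A sweep like that could start with something as blunt as checking which hosts still answer telnet with factory credentials. A rough sketch with nmap (the wordlist filenames are hypothetical, and 203.0.113.0/24 is just a documentation range):

    # probe a block for telnet services that still accept factory usernames/passwords
    nmap -p 23 --script telnet-brute \
        --script-args userdb=factory_users.txt,passdb=factory_passwords.txt \
        203.0.113.0/24

Anything that logs in gets flagged (or, in the parent's version, knocked offline) until its owner changes the credentials.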
by cellis on 11/1/16, 3:26 PM
1) A consortium of manufacturers of IoT devices banding together and signing an "autopatch" or "autohack" agreement. This would be an open-source, public hack-and-patch society that freezes out any manufacturers that don't agree to it. All customers would simply agree in their EULA that their devices may be "patched" by any means necessary if found to be insecure by the auto-hackers.
2) As botnets at the Mirai scale are now a matter of national security, make the NSA do its job and do roughly what is outlined in 1. Controversial, sure, but you can be damn sure that they already know about these unpatched devices and how to exploit them.
by FussyZeus on 11/1/16, 3:03 PM
This is made more asinine by the fact that we've had extremely easy-to-use methods of establishing trust between devices on a permanent basis, but because that would add three steps to the setup process, the marketing people refuse to let it happen.
Nobody wants to spend the money to do it right, and nobody wants to spend the money on devices that do it right, so here we are, and I see no way out of this situation.
by finishingmove on 11/1/16, 9:48 PM
by SFJulie on 11/2/16, 12:32 AM
I guess we can draw a conclusion here: security's assumptions about who the users are do not match human nature.
Security is failing the same way architects would fail if they assumed stairs with one-meter-high steps were okay.
IT security is failing because its model of human beings is flat-out wrong; hence, computer security as designed by our brightest minds is wrong.
Don't force-feed humans requirements that assume a level of fucks given that they don't have.
by youdontknowtho on 11/1/16, 7:00 PM
by ChefDenominator on 11/1/16, 5:04 PM
by rini17 on 11/1/16, 6:14 PM
by CiPHPerCoder on 11/1/16, 3:02 PM
Let's set blame aside for now. What caused this botnet?
- The tendency of IoT/smart-device vendors to eschew engineering discipline
- The tendency of _all_ companies to treat security as an optional extra rather than the cost of admittance to the marketplace
- The historical tendency of big companies /not/ being burned to the ground after a massive hack, which makes security a lower priority for many businesses
- The lack of a secure automatic update infrastructure (which also led to a recall), through which the vendor could have mitigated the vulnerabilities used
- General ignorance about the risks associated with default/weak/hard-coded security credentials (e.g. passwords)
Now let's look at each line item and discuss possible solutions:
+ Regulation could help here. Require third-party security assessments on IoT/smart devices to be sold? It's not the most elegant solution, but it would be a vast improvement over the current state of affairs.
+ This is a cultural problem that makes application security painful in every business vertical. It takes a lot of one-on-one communication to resolve. Seeing large companies lose their shirts over security negligence might change the conversation.
+ This is a huge problem for all software. (See link below.)
+ Education.
Regarding secure automatic updates: https://paragonie.com/blog/2016/10/guide-automatic-security-...
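For concreteness, here is a minimal sketch of the "verify before you apply" step such an update scheme needs. libsodium and an Ed25519 vendor key pinned at manufacture time are my assumptions for the sketch, not details taken from the linked guide:

    /* Sketch only: accept an update image only if its detached signature
     * verifies against the public key baked into the device. */
    #include <sodium.h>

    /* placeholder: the real key is provisioned at manufacture time */
    static const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES] = {0};

    int update_is_authentic(const unsigned char *image, unsigned long long image_len,
                            const unsigned char sig[crypto_sign_BYTES])
    {
        if (sodium_init() < 0)
            return 0;  /* refuse to proceed if the crypto library won't initialize */
        /* crypto_sign_verify_detached returns 0 only for a valid signature */
        return crypto_sign_verify_detached(sig, image, image_len, vendor_pk) == 0;
    }

The point of pinning the key in the firmware is that, assuming the signing key is kept off the update server, compromising the server alone isn't enough to push malicious images to devices.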
Now let's circle back to blame. What is the security industry responsible for? In my view:
- Failure to communicate with other industries and professions, such as electrical engineering.
- Failure to communicate with developers in general.
- Failure to educate people outside the industry about our own conventional wisdom.
- Failure to learn the challenges that others are trying to overcome, so security can be on the same team rather than yet another obstacle.
Through the blog posts on my company's website and a concerted effort to clean up Stack Overflow, I've been trying to educate PHP developers about better security practices for the past couple of years. It pays forward in spades. The rest of the security industry could do a lot of good if they did the same for their own respective communities.
The only problem with doing that is: there's no effective and ethical way to monetize it. I make more money from helping e-commerce sites recover from being hacked through easily preventable mistakes than I ever have from making the software that powers 30% of the Internet more secure. https://paragonie.com/blog/2015/12/year-2015-in-review
Solving the core problems is good for society, but society doesn't reward this behavior.
The security industry is broken because society is broken.
by digi_owl on 11/1/16, 11:07 PM
But a general-purpose computer (GPC) will always remain a GPC, and thus it is susceptible to being repurposed no matter how many "safeguards" we put in place to prevent it.
by Pica_soO on 11/1/16, 7:15 PM
by aaron695 on 11/2/16, 2:00 AM
by wnevets on 11/1/16, 8:13 PM
by ngneer on 11/1/16, 11:49 PM
by _pdp_ on 11/1/16, 3:15 PM
by WhiteHat1 on 11/13/16, 12:49 PM
Have you checked out this Mirai vulnerability scanner? Something everyone should do – whether a random home user or a large enterprise (and how many have CISOs?). It scans your IP and can pinpoint vulnerable devices: https://www.incapsula.com/mirai-scanner.html
by cloudjacker on 11/1/16, 5:17 PM
Or a complete focus on making money. Capitalism has refined itself over 30 years, and firms realize that security is expensive, making products is a lot cheaper than it used to be, and even if you invested in security, there could still be something unforeseen that compromises your system.
Nobody wants to be Sony or Microsoft and their litany of security woes.