There are legit security researchers out there, doing good work and finding real issues, but the vast, vast majority are … not that. If you open a security bug bounty program, for example, you’ll mostly get either auto-generated garbage like this, or just super generic non-issues like exposing numeric ids to clients, using anything less than the strictest CSP headers, etc.
I’m actually surprised there aren’t MORE bogus CVEs out there, based on my experience with bug bounty programs.
> If you open a security bug bounty program, for example, you’ll mostly get either auto-generated garbage like this, or just super generic non-issues like exposing numeric ids to clients, using anything less than the strictest CSP headers, etc.
No need for a bug-bounty program. I receive regular emails to any public address on our website, warning of clickjacking and capture of passwords (there are no passwords or sensitive data on our website), not using the strictest SPF/DMARC settings, …, all from “honest security researchers” who are “expecting a bounty no less than 150USD”.
I am concerned that if there is an increase in bogus CVEs, they will be taken less seriously over time and a lot of the good, important work will be lost. What can we normal non-security people do?
Some form of curation of reports seems like it would be an achievable (partial) solution. It seems like it would be less than perfect, but perhaps better than the status quo.
A lot of internal threat teams generate these kinds of reports, usually scoped to the organization(s) they support. Making these publicly available is a tricky proposition: a lot of companies treat their sources as secret sauce, and the reports are most valuable when scoped explicitly to the software being used by the organization.
A state-sponsored team decides to create hundreds of bogus CVEs that get through. People stop trusting CVEs and it becomes a dirty word. Legitimate CVEs now start getting ignored and there is no good mechanism to surface those properly to teams that need to know. People's systems are now more attacker friendly. Governments and corporations too.
Or, some private company steps in and says they'll take on the burden of sifting through the bogus. But they are not incentivized in the same way... and might only pick and choose what they work on, or not work on, or ignore their own CVEs.
Or, said state-sponsored team maneuvers itself to be that private company that offers to take on the burden! Even if it can't suppress a CVE entirely without tipping its hand, it could delay the announcement - thus giving its attack teams a heads-up on a security advisory!
Decentralized multi-party peer review might be one way around this - ensuring that no single entity can function as gatekeeper. It's a lot of overhead, though, and there's way more time sensitivity than there is with academic peer review processes. And who adjudicates who is an independent subject matter expert on Postgres?
It's a tough problem, made tougher by the fact that it's a "dark forest" environment where any potential advantage to any party will inevitably be leveraged.
Hire security people who understand CVEs and whether they apply to your environment or not.
Unfortunately it's a complicated system, and even valid CVEs don't always make sense for what your application does.
I work in code security and implementation of this software in 'secure' environments, and we always have customers complain that scanners find CVEs in our software. Then we have to send links to documentation explaining what the CVE means and that it is only dangerous under very particular implementation details that our application doesn't meet. Then the clients want to do the 'well the app found it, you fix it' crap.
They are the problem everywhere, and if we somehow "fix" CVEs so it works the way they think it works, they will keep being the problem in a hundred other contexts.
Maybe we should focus on some other place than "fixing" CVEs. But I have no idea how to fix IT charlatanism. But well, I have no idea how to "fix" CVEs either...
CVEs don't work all that well on their own as proof of exploitability.
It's always good to supplement with things like whether exploit tooling for said CVE exists (proof that there's some actual weakness behind the CVE) - or use data feeds like EPSS which rely on actual verified exploits to create their exploit probability score.
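For instance, here is a minimal sketch (my own, not an official client) of pulling a CVE's EPSS score from FIRST's public API. The endpoint and field names are as I recall them, so verify against FIRST's EPSS API documentation before relying on it:

    # Minimal sketch of querying FIRST's EPSS API for a CVE's exploit-probability score.
    # Endpoint and field names ("epss", "percentile") are from memory; verify against
    # FIRST's EPSS API docs before relying on this.
    import json
    import urllib.request

    def epss_score(cve_id: str) -> dict:
        url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            payload = json.load(resp)
        # "data" is a list of per-CVE records; empty if the CVE is unknown to EPSS.
        return payload["data"][0] if payload.get("data") else {}

    if __name__ == "__main__":
        print(epss_score("CVE-2021-44228"))  # e.g. {'cve': ..., 'epss': '0.9...', 'percentile': ...}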
I picked up the term "beg bounty" from somewhere a while back. It's a very useful phrase to describe the low-effort run-some-crappy-scan-and-spam-security-at-domain junk that comes through.
A fun side effect of this is that if you aren't a security researcher by trade and you report an actual security issue to a bug tracker, the first dozen responses will argue that it isn't a security issue rather than engaging with whether it is a bug, since that is the default response to reports that claim to be about security issues.
I think there’s a disconnect here: a CVE was a decent proxy for an actual security issue for a while, but at some point the system started attracting significant spam. The issue here is that the people who actually care about this are mostly measuring the wrong metrics: the existence or number of CVEs has never been a good indicator of the security of a product. Everyone here is like “oh, if people can search a CVE in my product then it makes it look bad”… well, no, that has never been true. People have conflated CVEs with severe issues, but actual security researchers know you can’t just count up bugs to see how bad something is. A CVE ID to them is literally just a tracking number, to make sure all information about a vulnerability is collated, and it continues to do an acceptable job at that.
The blame here lies with the MITRE corporation. They have over the years received millions of dollars to operate that program, and have completely abdicated at this point. First they started farming out CVE assignments, then they stopped interacting with oss-sec, now they just rubberstamp whatever spam comes in their report queue. The CVE program is failing, and it got so bad that passersby with little expertise smelled money, so now we have the failed DWF and the even-quicker-failed UVI, followed by the soon-to-fail GSD.
It's been almost a decade now that MITRE has been mismanaging the resources entrusted to it. I wish to God they'd hire someone who cares enough to fix it.
I agree in general. This post from Raymond does have me wondering why their keyboard input queue is so large it takes minutes to clear.
In the IP world, the idea of buffering a few minutes' worth of data is not normal or optimal. There may be some small buffer, but generally, for best latency and performance, we avoid large buffers and drop packets/inputs beyond the buffer capacity.
So while maybe not strictly a security issue (I can understand why someone might think of it as a DoS), I wonder if it made Microsoft reconsider their input queue size.
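To make the contrast concrete, here is a toy sketch (my own illustration, not how any real input queue is implemented) of the small-buffer, drop-on-overflow policy described above:

    # Toy bounded input queue: keep the buffer small and drop new events on overflow,
    # rather than buffering minutes' worth of input. Size and names are made up.
    from collections import deque

    MAX_EVENTS = 128          # small buffer for low latency
    queue = deque()

    def enqueue(event) -> bool:
        if len(queue) >= MAX_EVENTS:
            return False      # tail-drop: discard the new event instead of queueing it
        queue.append(event)
        return True

    def dequeue():
        return queue.popleft() if queue else None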
You joke but I had to deal with an actual "high score CVE" internally that dealt with a denial-of-service vulnerability that happens when an administrator misconfigures the software. It literally boils down to "if you misconfigure the daemon it won't start == OMG HIGH RATING VULN".
One solution is a higher bar of peer review to prevent issuance of bogus CVEs. Another approach would be to separate proposed vulns from confirmed/undisclosed ones. A human with good judgement in the loop is necessary to prevent DoS and spam.
In this case not only was the CVE a bullshit CVE, it also didn't properly scope the "affected" versions. The end result was that the version of the software we were running didn't even have the option that could potentially be misconfigured to cause the denial-of-service.
Wat, anyone can submit CVEs anonymously and they are just… published? Don’t tell the “security researchers” who send me emails about “vulnerabilities” like not having the CSP header on my website…
I am not sure it is what is happening here, but a problem I have seen with internal QA and security-focused teams is that they are incentivized to file bugs. The number of bugs filed is their monthly/quarterly achievement; it has nothing to do with the quality of those bugs or how they improve security. Just "hey I found these 10 bugs that I labeled XXX severity".
I just imagine somewhere there are people updating their CV or something else with "filed XX CVE's" like it is some kind of accomplishment for them.
This is an issue with QA as a whole. It shouldn’t be some bug-counting department: if they did their job well then they won’t find bugs at the end of the process (because bugs were caught earlier, i.e. shifted left, though some bugs will still exist because exhaustive search isn’t possible).
If you start quantifying QA's value by bugs found, then in the worst case they won't announce bugs in a month where they’ve already hit their quota.
Hey they automated that where we are. I will spend several hours a week remediating impossible to exploit vectors to tick compliance boxes, while aircraft carrier sized holes in the front end go unpatched because they are hard to fix.
Asymmetric warfare here: it's easy for someone to run a security scanning tool and very difficult for someone to debunk the false positives it finds. But those two people don't work for the same boss.
You are an oracle. That thing is such a super pain in the a##; it flags so many false positives all over the place that I have to spend immense hours reviewing garbage failures on a daily basis.
But these are two very different things. Sonarqube is SAST (static analysis, reads the code you wrote) and SCA (composition analysis, reads the dependencies you declare). Wiz is just SCA.
Sonarqube false positives come largely from untuned SAST configurations flagging all manner of suspected CWEs (code weaknesses).
Sounds terrible. Doesn't have to be that way though. If it's not actionable and relevant, nobody should see it. If you really adopt this approach, the tool choice doesn't really matter except for the varying complexity of filtering to ensure only the good stuff gets bubbled up.
Some of our engineers talk about SonarCube this, dependabot that, Snyk this so much I am suspicious any actual work is done. Shackles made of red tape. Standstill is velocity. Freedom is slavery?
- A clear policy from the CNAs that describes exactly which bugs should be assigned CVEs.
- A process by which bogus CVEs can be invalidated, and CVE spammers can be banned from further submissions.
- A way to register CVEs for vulnerabilities that span multiple programs.
- e.g. an HTTP proxy has a low-risk bug, and an HTTP server has a low-risk bug, but when the proxy and the server are deployed together, the bugs become exploitable.
- An appeals process for CVE description updates.
- e.g. You would not know from the description that CVE-2023-34188 is trivially exploitable and can reliably lock up vulnerable servers because MITRE refuses to update it.
I think a better approach would be to acknowledge that the CVE system thoroughly conflates two almost orthogonal things:
1. CVE numbers are a way to refer to a (potential) issue. With a CVE number, one can look up an issue in a distro bug tracker or ask support a question or search mailing lists.
2. A CVE number comes with an assessment of whether an issue is real and how bad it is, often in the NVD.
From the FAQ [0]:
> No, CVE is not a vulnerability database. CVE enables the correlation of vulnerability data across tools, databases, and people. This enables two or more people or tools to refer to a vulnerability and know they are referring to the same issue.
So I think that a CVE should probably be rejected if it’s a duplicate, but not just for being a non-issue. But anyone using a CVE as evidence that an issue exists and thinks that NVD is doing a bad job should complain about the NVD, not CVE spam. And people should be extremely cautious about expecting the number of CVEs to mean much.
Right now the CVE process doesn't allow unilateral rejection of CVEs by maintainers because they have the opposite incentive. It is in their interest to deny that a vulnerability exists, both because there is a perception that more vulnerabilities discovered means lower quality software, and because it is extra effort their team needs to handle. It's not ethical, but it's also not uncommon for companies to not investigate and just deny a bug is real. Sometimes it isn't even an ethical issue, but something poorly described by the reporter, or something that seems absolutely implausible to the developers.
I don't know what improvements to the current process actually look like, but it needs to be able to account for, effectively, fraud and apathy on both sides of the equation.
Not trying to be rude, but do you have an example of the “http server and proxy” case? Seems to me if the server is vulnerable, then a proxy in your situation just makes it easier to exploit?
Until recently, LiteSpeed parsed Content-Length values using strtoll in the base-0 mode. Thus, by sending Content-Length values prefixed with 0, you can get it to interpret the value in base-8. Most HTTP proxy servers strip leading 0s from Content-Lengths, rendering the bug in LiteSpeed not exploitable. Until recently, HAProxy didn't do this, which made HAProxy + LiteSpeed vulnerable to request smuggling.
I put together a PoC demonstrating how this can be used to bypass any HAProxy ACL with default configurations for HAProxy (except the added ACL) and LiteSpeed.
Clearly, LiteSpeed is more responsible for this problem than HAProxy, but the bug in LiteSpeed violates HAProxy's security model, not its own.
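A rough sketch of the disagreement (my own illustration in Python, not the actual LiteSpeed or HAProxy code), where strtoll_base0 mimics C's strtoll(s, NULL, 0) treating a leading 0 as an octal prefix:

    # Two parsers disagreeing on the same Content-Length header is the seed of
    # request smuggling. strtoll_base0 approximates C's strtoll with base 0.
    def strtoll_base0(s: str) -> int:
        s = s.strip()
        digits = s.lstrip("+-")
        if digits.lower().startswith("0x"):
            return int(s, 16)             # hex prefix
        if digits.startswith("0") and len(digits) > 1:
            return int(s, 8)              # leading zero -> octal, the surprising case
        return int(s, 10)

    header = "00040"                      # attacker-supplied Content-Length
    proxy_view = int(header, 10)          # a proxy parsing plain base 10 sees 40 bytes
    server_view = strtoll_base0(header)   # base-0 semantics see 0o40 == 32 bytes
    print(proxy_view, server_view)        # the 8-byte gap is where the smuggled request hides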
Anyone can make a CVE and get it escalated out to be broadcast on all the CVE reporting sites for all eternity, and you as the maintainer of a project might never be notified at all. There's a whole community of people, somewhere, who just like to come up with really stupid CVEs and put them up. They get mirrored on hundreds of feeds/sites and there seems to be no vetting of any kind (I really don't know). Then the whole world thinks your project is insecure. I typically find these CVEs (well, previously) by seeing them on Twitter. Even if these CVEs weren't frivolous, how is that the right way to go about reporting CVEs? It seems to be a pretty awful system.
It’s funny, it used to be impossible to get valid vulns assigned a CVE if the vendor wasn’t cooperating.
And now it seems it’s possible to spam bogus CVE entries for mostly OSS projects, which devalues the use of CVE… while it’s also nearly impossible to get a valid CVE if a vendor who is a CNA stonewalls you.
A CVE is just a common identifier, like a bug number when you file a bug report. Just because you can file a bug in most projects doesn't mean those bug numbers represent valid bugs, and the same applies to CVEs. Some in the industry tend to assume that every CVE is valid without considering it for themselves. That's where the problem begins.
Bug reports are opened and then, after they get a number, are triaged.
CVE numbers are applied for, and then separately they are granted only sometimes.
A CVE number being granted means that an organization, a relevant CNA[0] for the project in question, has confirmed that they think it is a real security issue.
MITRE says CVE IDs indicate a real vulnerability, not "reports that have not been triaged yet", so clearly it's intended that they're mostly valid.
It is true that you have to think critically about CVEs, obviously, but I don't think it's helpful to excuse MITRE for this by saying "no, actually, the numbers were never meant to mean anything", when they themselves have never held that stance.
> confirmed that they think it is a real security issue
There is no definition or even industry consensus of what is a real security issue. Scoring systems are widely accepted as flawed. So in practice, this isn't how it works, as the article demonstrates.
The fun part is that the EU is considering forbidding putting a product on the market with known security issues. Red Hat has stated on the record that they often get a fresh CVE within just 15 minutes of publishing a container. If this passes as drafted, some sort of moderation will be put in place to protect "business as usual" for sure.
The solution to this problem is to require the submitter to include a unit test that demonstrates the problem along with the CVE. If the unit test succeeds in DDoSing or whatever, then the CVE is published. If your unit test fails to produce the security problem, then it is ignored.
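To illustrate, a reproducer requirement could look something like this hypothetical harness (the daemon name, config, and SIGHUP trigger are placeholders modeled on the claim discussed elsewhere in this thread; the CVE would only be accepted if the test passes):

    # Hypothetical reproducer test a submitter would have to attach to a CVE request.
    # "./target-daemon" and its flags are placeholders; the idea is that the CVE is
    # only published if this test actually demonstrates the claimed crash.
    import os
    import signal
    import subprocess
    import time

    def test_reported_dos_reproduces():
        proc = subprocess.Popen(["./target-daemon", "--config", "test.conf"])
        try:
            time.sleep(1.0)                      # let the daemon start
            for _ in range(1000):                # the trigger the reporter claims is fatal
                os.kill(proc.pid, signal.SIGHUP)
                time.sleep(0.001)
            time.sleep(1.0)
            # Accept the report only if the daemon really died from the claimed trigger.
            assert proc.poll() is not None, "daemon survived; reported DoS does not reproduce"
        finally:
            if proc.poll() is None:
                proc.terminate()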
The article claims that this denial-of-service attack would rate a 9.8 on the CVSS scale if the CVE wasn't bogus, but NIST has this handy calculator, and it's saying 4.9 for me:
(I can get it to report a severity of 8.9 by setting the availability impact to "High". However, I don't think that's appropriate, since you can just restart the postgres daemon, assuming repeated SIGHUPs do actually crash it.)
This and the strange cURL CVE suggest it's a good time for vulnerability scanners to support VEX. Package publishers or users of vulnerability scanners can create VEX documents which would help prevent their releases and so on from being blocked on these kinds of CVEs.
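VEX (Vulnerability Exploitability eXchange) documents let a publisher state, machine-readably, that a given CVE does not affect their product, so scanners can suppress it. As a rough sketch only, an OpenVEX-style statement looks roughly like the following; the CVE, product identifier, and exact field names are placeholders written from memory, so check the OpenVEX spec before using it:

    {
      "@context": "https://openvex.dev/ns/v0.2.0",
      "@id": "https://example.com/vex/2023-0001",
      "author": "Example Project Security Team",
      "timestamp": "2023-09-06T00:00:00Z",
      "version": 1,
      "statements": [
        {
          "vulnerability": { "name": "CVE-2023-12345" },
          "products": [ { "@id": "pkg:generic/example-app@1.0.0" } ],
          "status": "not_affected",
          "justification": "vulnerable_code_not_present"
        }
      ]
    }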
There is something, sadly likely illegal in most of the world, that would help: a blacklist of clout-chasing fake security researchers, widely shared by people in a position to offer an interesting job, or any job really.
Permitting CVEs to be filed without the cooperation of vendors is sadly needed, so it's also the easiest avenue for abuse.
They are destroying a public good and deserve no less.
So... is it time yet to develop mitigations against DOS attacks by security researchers? The first thing I would suggest is that CVEs should only be requested by first parties (e.g. the manufacturer/developers), but that of course gives manufacturers too much leeway to deny real security issues.
Maybe some proof-of-work rating? For example, CVE severity is assigned not on theoretical susceptibility, but on actual breach level that the researcher is able to achieve.
The problem, as always, is trust, and trust arbitration.
CVEs are typically listed by the threat level of the problem. You can probably find hundreds of non-critical CVEs in many base containers for applications, so even PostgreSQL will have some.
The best thing a company can do is ensure there is no path to the vulnerability, such as the application being exposed to manipulation or an inside-network attack. This will buy any company worth its salt time to fix the problems or go through proper risk assessment.
Number one rule: always have tightly controlled ingress and egress protecting your applications.
When you're starting off as a small business, the whole concept of security escapes most people. So you have to first control your network. Once you have that in place you can make the important changes for security. Probably my fault for not explaining what I mean, but you need to start by tackling the most vulnerable point first.
The bogusly reported CVEs could be part of a more sophisticated reverse-attack attempt.
Suppose you run PostgreSQL 13 freshly patched, and a new vulnerability is introduced anonymously in the latest minor version (e.g. version 16.3). The attempt to make users update to the latest version could lead to more severe consequences than not updating.
I doubt it's that sophisticated. There's unfortunately an alignment problem where you can boost your resume by saying you "discovered" CVEs, and it's hard for a hiring company to check if a CVE in some random program is bogus garbage or real issue. As a result people are incentivised to file garbage CVEs. See also: garbage answers on Stack Overflow.
Edit: To expand a bit on "alignment issue". CVEs are easy to create and are quite a trusted system. There's no penalty for the reporter if they are bogus (discovered or not). Also there's a huge amount of downstream work that happens after a CVE is reported, eg. at Red Hat we have teams of people who run around categorizing and deploying fixes, customers have to update, there are security reporting tools, even government regulations. None of this burden is carried by the reporter if they are wrong.
But even "anonymous" CVEs have this problem since in some teams you can get raises/bonuses based on the number of CVEs you report. See the other reply here: https://news.ycombinator.com/item?id=37406345
Not necessarily: that email reported a legitimate bug and identified a buffer overflow. They don't mention anything about it being exploitable or try to work towards that. Then the CVE was published >20 days later. It's very possible someone else watching the Postgres mailing list saw an email with "buffer overflow" and pushed it out.
My conspiratorial read was that these could be an effective way to raise the noise floor. Make it difficult to impossible for a security team on a project to ever "catch up" and get to zero reported risk, get them acclimatized to constantly having some open security issues, boil the frogs slowly.
And while this is probably just a naive attempt to get yourself recognized, I like this take, too.