
> Even if Rust's type system prevents many mistakes, including memory errors, the code is still not immune to vulnerabilities, such as unexpected panics or wrongly implemented business rules.

> The way to go further is to mathematically prove that it is bug-free: this is named "formal verification" and what coq-of-rust proposes! This is the only way to ensure your code contains no bugs or vulnerabilities, even against state-level actors.

I like formal verification. But I consider this a misrepresentation of what it offers. It lets you mathematically prove that your code implements a specification correctly. It doesn't prove that your specification is bug-free & vulnerability-free. It doesn't prove that your specification correctly implements your business rules.

Formal verification is still extremely useful. It's often much easier to use a specification language to define invariants your programming language code needs to obey than to only implement things in a programming language. To me that's a much better, more honest selling point for formal methods. You have to write more code, and in a different language, but unlike just testing you can actually prove that invariants hold for all possible inputs.



It's very useful for things where the specification is much simpler than the implementation. Databases and file systems are good examples. A database can be specified as giant arrays with linear search. Now prove that the database with all its optimizations and concurrency behaves the same as the simple spec.


Even the trivial specification "given a valid input, the program exits successfully with some arbitrary output" will already get you very far: to prove this trivial specification as correct, you need to prove the absence of panics, undefined behavior or infinite loops. Doing that for a complex program will require formulating useful invariants and proving them. So even with only the trivial specification for the overall program, you'll likely find that the proof forces you to write genuinely useful specifications for several of its components: code that uses a component typically relies on that component being correct, and you need that correctness to prove the calling code can't crash.


> to prove this trivial specification as correct, you need to prove the absence of panics, undefined behavior or infinite loops.

Note that Rust supports this already for a significant subset of code, namely 'const fn' code. This is also the subset of Rust that could most feasibly be extended to support something akin to theorem proving since it's supposed to be safely evaluated "at compile time", just like a Coq proof term or script. If Rust had some simple but comprehensive support for raw "proof terms" - namely, expressions evaluated at compile time which could either result in () or abort the build, I assume that much of the "convenience" features involved in an actual theorem prover could be implemented as macros and live in custom crates. But we don't know what a raw "proof term" could look like in Rust for useful program properties.


Using (formal) specification languages doesn't guarantee a specification is "correct" either.

> You have to write more code, and in a different language, but unlike just testing you can actually prove that invariants hold for all possible inputs.

Formal verification is needed to show that the code actually implements a specification. If anything, a formal specification is really good for generating test suites for the implementation code.


> It doesn't prove that your specification is bug-free & vulnerability-free.

There are formal development methodologies where you start with a small set of very simple axioms and iteratively refine them. Eventually, you end up with code, and all refinement steps are formally verified along the way. This addresses your (valid) concerns. For example, many railway control systems have been developed this way. See e.g. references [1,2] for some more in-depth discussion. Some of these ideas might make a comeback now that code generation is cheap, albeit quite unreliable.

[1] https://raisetools.github.io

[2] https://raisetools.github.io/material/documentation/raise-me...


Yeah, that's fine if you can do it. His point is that most software is not that simple. I work in RISC-V verification, and the spec is full of requirements that are ambiguously worded, easy to misinterpret, or just plain unspecified. Even if you formally verify your chip against a formal specification, there's a huge gap between the asciidoc ISA manual and that formal specification where bugs can exist.

Something like compression is the absolute best case - you have a trivial property that covers everything. Mostly it isn't like that though.


AWS says that when code has good formal verification coverage, they can be as aggressive as possible in performance optimization without having to worry about breaking the logic of the code. This helped them double the performance of the IAM ACL evaluation code.


> It doesn't prove that your specification is bug-free & vulnerability-free.

"Mathematically prove" might be a little strong, but if the implementation and the specification agree, then it is proven in the more general sense of the word. It's similar to double-entry accounting: there are two sides to the coin, and both have to agree, or else you know there is a problem somewhere. Making the very same mistake on both sides is statistically unlikely, so when they do agree there is sufficient evidence to believe it is proven.

What is more likely is that one does not take the time to truly understand what they need from the software and introduces features that, in hindsight, they wish they hadn't. But that is done with more intentionality and isn't really the same thing as a bug or vulnerability in the traditional sense.


You are describing "validation": the process of verifying that your spec really says what you mean.


He is saying that the original spec can be incorrect.


Yes. And the post you're replying to is pointing out that deciding that is exactly what `Validation` is for. Verification is confirming the software meets the spec. Validation is confirming that the software meets the actual desired behavior.


I think the comment was way shorter than it currently is when I replied. It is sometimes difficult to say what is meant with "validation". Many also use it to mean "functional testing", for example, which is incorrect.


Based on what amw-zero stated, they're using it in the formal QC sense, as in ISO 9001. In that context the two words have distinct meanings.

I do agree that colloquially they can be used in different ways, but as we're talking about formal verification as a concept, in this case verification aligns with that definition.


Yes - that is what [Validation](https://en.wikipedia.org/wiki/Verification_and_validation) is for. I'm not saying it's easy (it's not). But there is a term for this.


I sort of imagine (but I'm not experienced enough to actually state) that you could create a negative specification. Something like: this program will certainly not open the file flag.txt

This doesn't guarantee the program is correct. But it's not nothing.



