Google Says Spectre and Meltdown Are Too Difficult to Fix (i-programmer.info)
63 points by Merad on Feb 22, 2019 | hide | past | favorite | 6 comments


The conclusion is beautifully scary:

"Computer systems have become massively complex in pursuit of the seemingly number-one goal of performance. We’ve been extraordinarily successful at making them faster and more powerful, but also more complicated, facilitated by our many ways of creating abstractions. The tower of abstractions has allowed us to gain confidence in our designs through separate reasoning and verification, separating hardware from software, and introducing security boundaries. But we see again that our abstractions leak, side-channels exist outside of our models, and now, down deep in the hardware where we were not supposed to see, there are vulnerabilities in the very chips we deployed the world over. Our models, our mental models, are wrong; we have been trading security for performance and complexity all along and didn’t know it. It is now a painful irony that today, defense requires even more complexity with software mitigations, most of which we know to be incomplete. And complexity makes these three open problems all that much harder. Spectre is perhaps, too appropriately named, as it seems destined to haunt us for a long time."

Worth reading the paper [1].

[1] https://arxiv.org/abs/1902.05178


Maybe link to the paper this is based on instead?

https://arxiv.org/abs/1902.05178


Summary: never consider data in a process to be confidential if you have evil code in the same process (even if that code is in a virtual machine or interpreted).

That doesn't actually seem like too big a limitation. Apart from JavaScript, eBPF, PostScript, etc., there aren't that many places where potentially evil code is run alongside confidential data.

All those now have to be split out into separate worker processes to be secure. The worker processes can have partially shared address spaces through memory mapping, letting the programmer decide what will be shared with the untrusted code.


> All those now have to be split out into separate worker processes to be secure.

How can separate processes help if, as the article says, even virtual machines are problematic?

It looks like the only solution is to stop running untrusted code. That includes JavaScript.


Late follow up here...

Virtual machines are affected because the guest kernel is 'running' in the same host process as the virtual machine software, and can therefore exfiltrate data.


The paper makes some interesting points. Honestly, I found some of its statements more interesting than the overview this article gives.



