
All I’m really hearing is that this guy rubs you the wrong way, so you’re not going to give him the benefit of the doubt that you’d give to others.

I mean, maybe you’re right that his personality will turn everyone off and none of this stuff will ever make it upstream. But that kind of seems like a problem you’re actively trying to create via your discourse.


Well, technically both WaitOnAddress and SRWLOCK use the same "wait/wake by thread ID" primitive. WaitOnAddress uses a hash table to store the thread ID to wake for an address, whereas SRWLOCK can just store that in the SRWLOCK itself (well, in an object on the waiter's stack, pointed to by the SRWLOCK).
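
A minimal sketch of the user-facing half of that (Windows; link against Synchronization.lib). The shared "wait/wake by thread ID" plumbing lives below this API and isn't visible from user code:

    #include <windows.h>
    #include <stdio.h>

    LONG g_state = 0; /* the address being waited on */

    DWORD WINAPI waiter(LPVOID unused)
    {
        (void)unused;
        LONG undesired = 0;
        /* Blocks until g_state no longer equals 'undesired'. */
        while (g_state == undesired)
            WaitOnAddress(&g_state, &undesired, sizeof(g_state), INFINITE);
        printf("woke: g_state=%ld\n", g_state);
        return 0;
    }

    int main(void)
    {
        HANDLE h = CreateThread(NULL, 0, waiter, NULL, 0, NULL);
        Sleep(100);
        InterlockedExchange(&g_state, 1); /* change the value... */
        WakeByAddressSingle(&g_state);    /* ...then wake one waiter on it */
        WaitForSingleObject(h, INFINITE);
        return 0;
    }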


> If you want to claim that a language is memory-unsafe, POC || GTFO.

There's a POC right in the post, demonstrating type confusion due to a torn read of a fat pointer. I think it could just as easily have been an out-of-bounds write via a torn read of a slice. I don't see how you can seriously call this memory safe, even by a conservative definition.
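
For concreteness, here's the bug class transposed to C so the two-word tear is explicit (a hedged sketch, not the post's POC, which is Go; a Go interface value or slice header tears the same way):

    /* Build with: cc -O0 -pthread tear.c
       A data race is UB in C too, so this is illustrative only. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct { const char *ptr; size_t len; } fatptr; /* two words */

    static char small_buf[8] = "short";
    static char big_buf[4096];

    static fatptr shared = { small_buf, 5 }; /* racy: no atomics, no lock */

    static void *flipper(void *unused)
    {
        (void)unused;
        for (;;) {
            shared = (fatptr){ small_buf, 5 };  /* two separate word stores */
            shared = (fatptr){ big_buf, 4096 };
        }
    }

    int main(void)
    {
        memset(big_buf, 'x', sizeof(big_buf));
        pthread_t t;
        pthread_create(&t, NULL, flipper, NULL);
        for (;;) {
            fatptr p = shared; /* torn read: ptr and len may disagree */
            if (p.ptr == small_buf && p.len > sizeof(small_buf)) {
                /* {ptr=small_buf, len=4096}: any use of p now reads
                   out of bounds. */
                printf("torn: small buffer with len=%zu\n", p.len);
                return 1;
            }
        }
    }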

Did you mean POC against a real program? Is that your bar?


You need a non-contrived example of a memory-corrupting data race that gives attackers the ability to control memory, through type confusion or a memory lifecycle bug or something like it. You don't have to write the exploit but you have to be able to tell the story of how the exploit would actually work --- "I ran this code and it segfaulted" is not enough. It isn't even enough for C code!


The post is a demonstration of a class of problems: causing Go to treat an integer field as a pointer and access the memory behind that pointer, without using Go's "unsafe.Pointer" (or any other operation documented as unsafe).

We're talking about programming languages being memory safe (like fly.io does on its security page [1]), not about specific applications.

It may be helpful to think of this as the security of the programming language implementation. We're considering inputs to that implementation that are valid and use none of the bits marked "unsafe" (though I note the Go project itself isn't very clear on whether it claims to be memory-safe). Then we evaluate whether the implementation fulfills what people think it fulfills, i.e. "being a memory safe programming language": producing programs, under some constraints (no "unsafe"), that are themselves memory safe.

The example in the OP demonstrates a break in that expectation: under those conditions, the implementation produced a program that is not memory safe.

[1]: https://fly.io/docs/security/security-at-fly-io/#application...


The thread you're commenting in has already discussed everything this comment says.

If you've got concerns about our security page, I think you should first take them to the ISRG Prossimo project.

https://www.memorysafety.org/docs/memory-safety/


In this thread I linked the fly.io security page because it helps establish that one can talk about _languages_ specifically as being memory safe, which is something you seem to be rejecting as a concept in the parent and other comments.

(In a separate comment about "what do people claim about Go anyhow", I linked the memorysafety.org page. I didn't expect that one to get you to the understanding that we can evaluate programming languages as memory safe or not; something from the company where someone was a founder seemed more likely to prompt reconsidering the framing of what we're examining.)


Huh? No, I'm not. Go is a memory-safe programming language, like Java before it, like Python, Ruby, Javascript, and of course Rust.


So you're saying nobody cares about actual memory safety in concurrent code? Then why did the Swift folks bother to finally make the language memory-safe (just as safe as Rust) for concurrent code? Heck why did the Java folks bother to define their safe concurrency/memory model to begin with? They could have done it the Golang way and not cared about the issue.


I don't know why you're inventing things for me to have said.


What about ROL r/m16, 8?


This would indeed work (and is likely the better solution), but unlike BSWAP and XCHG, it also changes flags.
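
Both single-instruction options, as a sketch (GCC/Clang inline asm, x86 only; the "cc" clobber on the ROL version is exactly the flags difference being described):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t swap_rol(uint16_t v)
    {
        __asm__("rolw $8, %0" : "+r"(v) : : "cc"); /* clobbers CF/OF */
        return v;
    }

    static uint16_t swap_xchg(uint16_t v)
    {
        __asm__("xchgb %b0, %h0" : "+Q"(v)); /* flags untouched */
        return v;
    }

    int main(void)
    {
        printf("%04x %04x\n", swap_rol(0x1234), swap_xchg(0x1234)); /* 3412 3412 */
        return 0;
    }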


I guess if the arch’s varargs conventions do something other than put each 32-bit value in a 64-bit “slot” (likely for inputs that end up on the stack, at least), then some of the arguments will not line up. Probably some of the last args will get combined into high/low parts of a 64-bit register when moved into registers to pass to the kernel. And then subsequent register inputs will get garbage from the stack.

Need to cast them to long or size_t or whatever to prevent this.
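
Roughly this, as a sketch (take_longs is a hypothetical stand-in for a kernel-bound variadic function like syscall(2) that reads each argument at full word width):

    #include <stdarg.h>
    #include <stdio.h>

    static long take_longs(int n, ...)
    {
        va_list ap;
        va_start(ap, n);
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += va_arg(ap, long); /* consumes a full 64-bit slot on LP64 */
        va_end(ap);
        return sum;
    }

    int main(void)
    {
        int fd = 3, len = 42;
        /* UB on LP64: ints stay 32-bit after default promotions but are
           read back as 64-bit, so upper halves (or later stack slots)
           hold garbage:
           long bad = take_longs(2, fd, len); */
        long good = take_longs(2, (long)fd, (long)len); /* widths line up */
        printf("%ld\n", good);
        return 0;
    }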


Yes


No inline functions in library headers, then.


Inline is mostly pointless in C anyway though.

But it might be a minor problem for STB-style header libraries.

It's not uncommon for C++ projects to include the implementation of an STB-style header in a C++ source file instead of 'isolating' it in a C source file. That's about the only reason why I still support the common C/C++ subset in my C libraries.
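
For anyone unfamiliar with the pattern, a sketch (mylib.h / mylib_add are hypothetical names): the header is included anywhere for declarations, and exactly one translation unit defines the implementation macro first; that TU may be C++ rather than C, which is why the body has to stay in the common subset:

    /* ---- mylib.h ---- */
    #ifndef MYLIB_H
    #define MYLIB_H
    int mylib_add(int a, int b);
    #endif /* MYLIB_H */

    #ifdef MYLIB_IMPLEMENTATION
    int mylib_add(int a, int b)
    {
        return a + b; /* must compile in the common C/C++ subset */
    }
    #endif /* MYLIB_IMPLEMENTATION */

    /* ---- impl.cpp: the one TU that instantiates the implementation ----
       #define MYLIB_IMPLEMENTATION
       #include "mylib.h" */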


Despite the title, it's not actually open source yet. Soon!


Your NIC can already access arbitrary RAM via DMA. It can read your keys already.


That is often incorrect for Apple computers, whether x64+T2 or aarch64: https://support.apple.com/fr-tn/guide/security/seca4960c2b5/...

And it’s often incorrect on x64 PCs when IOMMU access is appropriately segmented. See also e.g. Thunderclap: https://www.ndss-symposium.org/wp-content/uploads/ndss2019_0...

It may still be true in some cases, but it shouldn’t be taken for granted that it’s always true.


Kernels enable the CPU's IOMMU, which limits the memory the NIC can access to only the memory it needs to access. This is also why it should be safe to attach PCIe-over-Thunderbolt devices.

Although I think for Intel CPUs the IOMMU needed to be disabled for years because their iGPU driver could not work with it. I hope things have improved with the Xe GPUs.
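
(For the Linux case, a hedged pointer: whether the IOMMU is actually enforcing this is visible at boot.)

    # kernel command line
    intel_iommu=on iommu=force
    # confirm after boot
    dmesg | grep -i -e DMAR -e IOMMU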



I think the MPL attempts to be that license.


Supposedly “ampersand” comes from when the alphabet song/rhyme used to end in “and ‘per se’ and”. School kids mushed it together into “ampersand”, and over time it became the name for &.

