Funny thing is that malloc also behaves like an arena. When your program starts, malloc reserves a lot of memory, and when your program ends, all of that memory is released by the OS. Memory leaks end up not being a memory-safety problem.
So you will still need a borrow checker, for the same reasons Rust needs one and C/C++ needed one.
You're right.
Always good to remember that Apple was and still is the main company behind LLVM.
Swift was built and is maintained by the same team that worked on LLVM.
Also, Swift has its own fork of LLVM, and LLVM has a lot of built-in features designed for Swift, like its calling convention and async transformations.
The number of features Swift ships, while also maintaining its own LLVM fork, is just not something you can do without a lot of money and years of accumulated expertise.
lol people really say whatever comes to their mind around here don't they? I'm pretty sure all of the companies associated with these targets would strongly disagree with you
That's great!
Interop with C++ is such a complex task. Congrats on your work! It's definitely not an easy thing.
I've always wondered what is the best way to interact with C++ template instantiation while keeping performance.
For a static language, you'd probably need to translate your types to C++ during compilation, ask Clang/GCC/MSVC to compile the generated C++ file, and then link the final result.
And finally, pray to the computer gods that name mangling was done right.
The C ABI is the System V ABI on Unix, since C was literally created for it. And that is the ABI followed by pretty much every Unix successor: Linux, Apple's OSes, FreeBSD.
Windows has its own ABI.
The ABI split is pretty much legacy, plus the fact that the x86-64 System V ABI was designed by AMD together with the Linux toolchain folks, while Microsoft had worked with Intel on the Itanium ABI.
> And that is the abi followed by pretty much any Unix successor: Linux, Apple's OS, FreeBSD.
Even limiting that to “on x64”, I don’t see how that’s true. To make a syscall, the ABI on Linux says “make the call yourself”, while macOS (and all the BSDs, I think) says “call the provided library function”.
Also (https://developer.apple.com/documentation/xcode/writing-64-b...): “Apple platforms typically follow the data representation and procedure call rules in the standard System V psABI for AMD64, using the LP64 programming model. However, when those rules are in conflict with the longstanding behavior of the Apple LLVM compiler (Clang) on Apple platforms, then the ABI typically diverges from the standard Processor Specific Application Binary Interface (psABI) and instead follows longstanding behavior”
Some of the exceptions mentioned there are:
- Asynchronous Swift functions receive the address of their async frame in r14. r14 is no longer a callee-saved register for such calls.
- Integer arguments that are smaller than int are required to be promoted to int by the caller, and the callee may assume that this has been done. (This includes enumerations whose underlying type is smaller than int.) For example, if the caller passes a signed short argument in a register, the low 32 bits of the register at the moment of call must represent a value between -32,768 and 32,767 (inclusive). Similarly, if the caller passes an unsigned char argument in a register, the low 32 bits of the register at the moment of call must represent a value between 0 and 255 (inclusive). This rule also applies to return values and arguments passed on the stack.
Swift has its own ABI and calling convention, so that makes sense that Apple adapted to it.
The System V ABI doesn't say anything about syscalls.
The Windows x86-64 ABI is essentially the x86 ABI carried forward; that's why you can only pass arguments in 4 registers (while the Unix ABI uses 6), because x86 only had 8 registers.
I think people have expectations about this that are misaligned with history and reality, to be honest. We can't expect every OS to do things the same way.
C was created to rewrite the UNIX system, and POSIX compliance is followed by all its successors, with minimal differences.
When it became clear that Itanium was a failure, Microsoft couldn't just pull a new ABI out of thin air and break all existing applications, so they just reused the same x86 ABI.
Yeah, that makes sense.
The Rust type system isn't "affine" as in affine logic. Rust allows different forms of contraction, which affine logic strictly prohibits.
And some people like to claim that the Curry-Howard correspondence proves something about their type system, but this is only true for dependently typed languages.
> Rust allows different forms of contraction, which affine logic strictly prohibits.
That's just wrong. Affine logic totally can have contraction for some propositions.
Also, CH totally exists for non-dependently-typed languages -- for instance, there is a beautiful correspondence between the simply-typed lambda calculus and propositional logic. Please stop repeating claims that you apparently do not understand.
The main point of Affine logic is that it doesn't allow contraction, and the Rust type system does allow different forms of contraction. How exactly is Rust an "affine language"?
Yep, that's true.
But multiple immutable shared references are a form of contraction, while mutable references are actually affine.
Swift doesn't have references like Rust, and you can't even have unsafe raw pointers to variables without producing a dangling pointer, but this makes Swift more restrictive and less powerful than Rust.
> multiple immutable shared references are a form of contraction
No, they are not. You're not using a value more than once, you are borrowing it, which is an extension of affine logic that stays true to the core principles of affinity. I have modeled multiple shared references in an affine logic (look up RustBelt), i.e. in a logic that doesn't have contraction, so we have very hard evidence for this claim.
> The main point of Affine logic is that it doesn't allow contraction, and the Rust type system does allow different forms of contraction. How exactly is Rust an "affine language"?
The point of Affine logic is that it doesn't allow universal, unconstrained contraction, not that you can never do an operation that has the same properties that contraction would have in some circumstances. The same is true of Rust's type system.
I brought up Curry-Howard to explain why I am using an SO post about "affine logic" to make an argument about the definition of "affine language". Both are defined the same way: no (universal) contraction. That claim is obviously correct, so you are going to have to be a bit more concrete about which claim you disagree with.
(The other part you said about contraction and affine logics has already been successfully rebutted in some other replies so I won't repeat their points.)
Funny thing is that you can get undefined behavior and segfaults using only "safe Rust", and the Rust compiler has subtle bugs that let you disable important checks (like type checking), which can leave your code completely broken.
But thanks to some crazy propaganda, Rust devs believe that any Rust code is safe and sound no matter what.