Hacker News | bitwalker's comments

Erlang absolutely has closures; you are mistaken. What you are referring to are "function captures", which bind a function reference as a value, and there is no environment to close over with those. However, you can also define closures which, as you'd expect, close over bindings in the environment in which the closure is defined.

The interaction between hot reloads and function captures in general is a bit subtle, particularly when it comes to how a function is captured. A fully qualified function capture is reloaded normally, but a capture using just a local name refers to the version of the module at the time it was captured, but is force upgraded after two consecutive hot upgrades, as only two versions of a module are allowed to exist at the same time. For this reason, you have to be careful about how you capture functions, depending on the semantics you want.


> but is force upgraded after two consecutive hot upgrades, as only two versions of a module are allowed to exist at the same time.

Force upgraded is maybe misleading. When a module is loaded for the 3rd time, any processes that still have the first version in their stack are killed. That may result in a supervisor restarting them with new code, if they're supervised.


Ah right, good point - I was trying to remember the exact behavior, but couldn't recall if an error is raised (and when), or if the underlying module is just replaced and "jesus take the wheel" after that.


What does it look like? I was talking about this thing:

   Val = 1, SumFun = fun(X) -> X + Val end, SumFun(2).
It looks like you define an arity-1 function that captures Val, while in fact you define an arity-2 function and bind 1 as its first argument. Since you can't rebind Val anyway, it's as good as a closure, but technically it doesn't capture the environment.

Maybe I'm mistaken and there is another way to express it?


The example you've given here does not work the way you think it does. I would agree however that the mechanics of closure environments is simpler in Erlang due to the fact that values are immutable, as opposed to closures in other languages where mutability must be accounted for.

I would also note that, for the example you've given, the compiler _could_ constant-fold the whole thing away, but for the sake of argument, let's assume that `Val` is an argument to the current function in which `SumFun` is defined, and so the compiler cannot reason about the actual value that was bound.

The closure will be constructed at the point it is captured, using the `make_fun` BIF, with a given number of free var slots (in this case, 1 for the capture of `Val`). `Val` is written to the slot in the closure environment at this time as well. See the implementation of the BIF [here](https://github.com/erlang/otp/blob/6cefa05a2a977864150908feb...) if you are curious.

At runtime, when the closure is executed, the underlying function receives the closure environment, from which it loads any free vars. In my own Erlang compiler, the closure environment was given via pointer, as the first argument to the function, and then instructions were emitted to load free variables relative to that pointer. I believe BEAM does the same thing, though it may differ in the specific details; conceptually, that is how it works.

The compiler obviously must generate a new free function definition for closures with free variables (hence the name of the function you see in the interactive shell, or in debug output). The captured MFA of the closure is this generated function. The runtime distinguishes between the two types of closures (function captures vs actual closures) based on the metadata of the func value itself.
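For illustration, here's a rough Python sketch of that lowering. The names `lifted_fun` and `make_fun` are my own, and this is not BEAM's actual representation, just the general shape:

```python
# Hypothetical sketch of lowering `fun(X) -> X + Val end`: the body
# becomes a compiler-generated free function that takes the closure
# environment as an extra argument, and "creating the fun" packages
# the code pointer together with the env.

def lifted_fun(env, x):       # generated free function
    (val,) = env              # load free variables from the environment
    return x + val

def make_fun(val):            # closure construction, at the capture point
    env = (val,)              # one slot per free variable, written now
    return (lifted_fun, env)  # the closure value: code + environment

code, env = make_fun(40)
print(code(env, 2))  # 42
```

The key point is that the environment is written exactly once, at the point the fun value is created, and the generated function only ever reads from it.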

Like I mentioned near the top, it's worth bearing in mind that the compiler can also do quite a bit of simplification and optimization during compilation to BEAM - so there may be cases where you end up with a function capture instead of a closure, because the compiler was able to remove the need for the free variable in cases like your example, but I can't recall what erlc specifically does and does not do in that regard.


> let's assume that `Val` is an argument to the current function in which `SumFun` is defined, and so the compiler cannot reason about the actual value that was bound.

That was exactly the case I was talking about, because otherwise there is no need to even make arity 2 function. If the value is known at compile time, the constant is embedded into the body of inlined function.

>At runtime, when the closure is executed, the underlying function receives the closure environment, from which it loads any free vars.

To my understanding, no it doesn't, as the value is resolved when the function pointer is created, not when the underlying function executes, which the code you linked shows too. I know it uses the "env" as a structure field, but it's partial application, not an actual closure which has access to the parent scope. Consider two counterexamples in Python:

    from functools import partial
    ret = []

    for x in range(1,10): ret.append(partial(lambda y: y*2, x)) # that's what Erlang does: x is bound when the partial is created

    for x in range(1,10): ret.append(lambda: x*2) # that's an actual closure, as all lambdas will return 18 because x is resolved from the parent context at call time
But then again, it doesn't matter since variables are assigned only once.
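For what it's worth, the distinction can be made concrete and runnable in Python, partial application versus a true late-binding closure:

```python
from functools import partial

# Partial application: x is bound at creation time, so each callable
# remembers a different value.
partials = [partial(lambda y: y * 2, x) for x in range(1, 10)]

# True closure: x is looked up in the enclosing scope at call time,
# so every callable sees the final value of x (9).
closures = [lambda: x * 2 for x in range(1, 10)]

print([f() for f in partials])  # [2, 4, 6, 8, 10, 12, 14, 16, 18]
print([f() for f in closures])  # [18, 18, 18, 18, 18, 18, 18, 18, 18]
```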

>Like I mentioned near the top, it's worth bearing in mind that the compiler can also do quite a bit of simplification and optimization during compilation to BEAM - so there may be cases where you end up with a function capture instead of a closure, because the compiler was able to remove the need for the free variable in cases like your example, but I can't recall what erlc specifically does and does not do in that regard.

I was looking into it a week ago, and erlc does what I described when it can't figure out the constant at compile time.

add: While we're at it, BEAM doesn't even know about variables, only values and registers, so it has nothing to capture anyway.


> To my understanding, no it doesn't, as the value is resolved when the function pointer is created, not when the underlying function executes, which the code you linked shows too. I know it uses the "env" as a structure field, but it's partial application, not an actual closure which has access to the parent scope

The code I linked literally shows that the closed-over terms are written into the closure environment when the fun is created, and if any term is a heap allocated object, it isn't copied into the closure, only the pointer is written into the env. The only reason you can't observe the effects of mutability here is because, unlike Python, there is no way to mutate bindings in Erlang.
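You can see what "only the pointer is written into the env" means with a quick Python analogue, since Python, unlike Erlang, lets you mutate the heap object afterwards:

```python
# A closure over a heap-allocated object captures only the reference;
# mutating the object later is visible through the closure. Erlang's
# immutability means you can never observe this effect, but the
# capture mechanism is the same.
shared = [1, 2, 3]
total = lambda: sum(shared)  # the env slot holds a pointer to the list
print(total())               # 6
shared.append(4)             # mutate the heap object, not the binding
print(total())               # 10
```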

Again, this isn't partial application - not in implementation nor in semantics.


>Again, this isn't partial application - not in implementation nor in semantics.

Maybe you will change your opinion if you take a look at the code 'erlc -S' produces for the inline function.


My reading is that they were evaluating leaders on their introversion/extroversion, intuitiveness, and overall success. So you could have both introverted-intuitive and extroverted-intuitive leaders, and overall, the introverted-intuitive leaders were _more_ successful.


BZ2 was one of my favorite games for quite some time after it released, just a blast to play. I had been really into BZ98 before it, and didn't think a sequel would be able to match the magic, but I ended up playing more of BZ2 in the end. Anyway, it's nice to be able to say thanks to one of the devs for all the good times!

Any particular interesting stories about BZ2's development that you recall? Always interesting to hear how games like this come together, so much of the time it seems like more luck than anything!


Glad you liked it!

I worked on 3d modeling/texturing and mp maps, though my time was divided between BZ2 and our other game Dark Reign 2 and other duties, so I was never full-time on BZ2. Of the games I worked on during my 3 years at Pandemic, BZ2 was my favorite.

I was very young when I joined Pandemic, having interned my senior year of high school and then joining full-time that June. It was a wonderful company to work for and had just broken free from Activision 6mo prior so there was a lot of early startup company culture being built. Witnessing how to build a company the right way was very informative to my later career.

Iirc I think we got a bit ahead of our skis with BZ2's engine rewrite, and in retrospect should have treated BZ2's tech as more of an expansion of BZ1's instead of a whole new game. The engine caused a lot of headaches and bugs, and it taught me early in my career that rewrites and new tech aren't always the right decision.

I think the decision to be more ambitious was due to the rapid transition in graphics going on in the late 90s, it was the age of early graphics accelerators like 3dfx Voodoo & Riva TNT. Hard to hit a moving target.

BZ2's codebase had quite the lineage, pieces of it transmogrified over the years from MechWarrior 2 -> Interstate 76 -> BZ1 -> BZ2 -> Dark Reign 2 & Star Wars Clone Wars. Getting ambitious while also dealing with the legacy bits I think contributed to the many early bugs. I wasn't an engineer but did futz around with the particle system which was one of the new fancy parts that got more attention.

BZ2 came in hot for Christmas 99, it wasn't exactly ready and needed a lot of patching over the next several months. If we'd been less ambitious with the tech I believe it'd have been a decently polished release and had more success, since art, design, and gameplay were not behind. I remember feeling sad we were releasing before it was ready, and learning the lesson that better planning and tech choices were the way to avoid that feeling.

Unfortunately Activision wasn't a great partner for us as a publisher, there was some bad blood as they didn't like that we'd broken off as a studio rather than stay under their wing, so they didn't do much to promote the game. We were pretty pissed with that. Same thing happened with Dark Reign 2.

As far as design goes, towards the end of 99 I dove into making BZ2 maps, building Ground Zero and I think a few others. Later the engine became known as the Ground Zero or Zero engine, not sure if it was named after the level or just chosen independently. I remember being particularly motivated to work on BZ2 because I really loved the game and it was getting close to release. I had some freedom to decide my time as DR2 was going through a rough spot, so I just decided to throw myself into BZ2.

There were some multiplayer maps I didn't get to finish, including one that was basically a big 3d asset I built in Softimage that was a rock formation with multiple levels that would have really pushed the boundaries of what was possible in BZ2 maps. I'd have loved to see that come together in time, tho I'm not sure what the AI pathfinding would have done with it - I think I designed it specifically for PvP tho. Once BZ2 was released, attention immediately turned to shipping DR2, which had gone through some team turnover and needed a near complete redesign in 6 months. I wound up making all the multiplayer maps for that, and having a great time with that team.

I hope to write up more war stories at another time. It's been 25(!) years now since release. Crazy.


I really enjoyed DR2. I think I had a demo I just played over and over.


iOS does support a form of call screening, called Live Voicemail, which transcribes the message being recorded by the caller and lets you pick up the call if you want. iOS also supports ambient song identification, with history, which I use frequently. Safari supports extensions, and I believe other browsers can as well, but I can't speak to that as I really only use Safari on my phone, even though it's not my primary browser on desktop.

Figured I'd drop a comment to let you know about the others though!


> iOS also supports ambient song identification, with history, which I use frequently.

Does it? I can't find any info about this online and all I can find seems to indicate that you can run Shazam and it scans for some amount of time afterwards but iOS kills it to save battery. It doesn't seem like you can get Google Pixel-like "Now Playing" which I sorely miss on my iPhone 15 Pro.


Right, the fact that they say they use it, as opposed to just referring to it, is an indicator. It's a great feature, using on-device "AI" (privacy-preserving), and it has been available since the Pixel 2 (2017).


That's great news, I didn't know they had rolled out those features. I don't really want to rewrite my extensions for another browser, but I'll see how applicable the others might be.


Those aren't the only failure modes - you can have two sets of servers partitioned from one another (in two different data centers), both of which are reachable by clients. Do you allow those partitions to remain available, or do you stop accepting client connections until the partition heals? The "right" choice depends entirely on the application.


In TFA's world-view clients will not talk to isolated servers.


So that would be choosing consistency over availability.


Yes. That's the point, that cloud changes the trade-off such that consistency becomes worthwhile.


Pretty disappointed to not find sources for the compiler yet, which is the part I'm by far the most interested in. Hopefully that will follow this release soon!


I wouldn't read too much into the syntax - the Haskell-like definitions are just a succinct way to describe the AST representation for the toy call-by-push-value language the post is talking about. Similarly, the syntax that resembles Lisp is actually (I believe) just shorthand for how the language would be represented in an extended lambda calculus with call-by-push-value semantics.

I think it's important to note that the post is clearly written for someone working on the implementation of compilers/type systems. So for example, in a compiler IR, especially those based on an extended lambda calculus representation, functions/closures are often curried, i.e. only take a single argument. As a result it can be an issue at a given call site to know whether a function is "saturated", i.e. all of its arguments have been applied so it will do some work and return the actual result; or whether it is only partially saturated, in which case it returns a new function that needs one less argument than before. I gather that this interacts poorly with typing in some situations, but I'm not super familiar with the issues here as I haven't gone down the rabbit hole of building a compiler with this kind of IR.
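A minimal sketch of the saturation issue, in Python for convenience (`curry2` is a made-up helper, not any real IR construct):

```python
# In a curried IR every function takes exactly one argument, so a call
# site must know whether an application is saturated (all arguments
# supplied, work actually happens) or partial (a new function is
# returned that still awaits more arguments).
def curry2(f):
    return lambda a: lambda b: f(a, b)

add = curry2(lambda a, b: a + b)
inc = add(1)       # unsaturated: returns a function of one argument
print(inc(2))      # 3 -- now saturated, the addition actually runs
print(add(1)(2))   # 3 -- fully saturated in one expression
```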

That said, as someone who works as a compiler engineer, I do recognize the potential of call-by-push-value, namely that it reifies evaluation semantics in the IR, rather than it being implicitly eager or lazy, and thus having to hack around the lack of one or the other, i.e. implementing lazy evaluation on top of eager semantics or vice versa. Making it a first-class citizen enables optimizations that would otherwise be impossible or very difficult to implement. To be clear though, you can obviously implement lazy on top of eager, or eager on top of lazy, without call-by-push-value - but I think if you anticipate needing to do that, and you are building a new compiler, using a call-by-push-value IR might be a worthwhile approach to take.
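As a toy illustration of what "reifying evaluation semantics" buys you, here is lazy-on-eager expressed with explicit thunk/force, a hand-rolled sketch in Python rather than call-by-push-value proper (this simple version is call-by-name: forcing twice re-runs the computation, as there is no memoization):

```python
# Call-by-push-value separates values from computations: `thunk`
# suspends a computation into a value, and `force` resumes it. With
# these explicit in the IR, eager and lazy evaluation are both
# directly expressible rather than one being hacked onto the other.
effects = []

def thunk(comp):   # value <- computation (suspend)
    return lambda: comp()

def force(t):      # computation <- value (resume)
    return t()

t = thunk(lambda: effects.append("ran") or 42)
print(effects)     # [] -- nothing has run yet
print(force(t))    # 42
print(effects)     # ['ran']
```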

There might be more to it that I'm not really seeing yet in my brief reading of it, but the post made sense to me, it's just assuming a lot about the reader (i.e. that you are deeply familiar with compiler implementation, type theory, etc.).


This is said so confidently, but clearly with little to no experience with the military or the VA to back it up. No vet is receiving benefits without something to back up their disability rating, even if that rating is higher than it seemingly "should" be. Some disabilities are easier to establish than others, because the issue causing the disability is so prevalent in the service that the military has little basis to deny them. Hearing loss is a common one. That said, I'm a vet, with significant hearing loss due to my time in (I was an F-16 maintainer on the flightline, an extremely intense noise environment). I'm not eligible for _any_ disability rating at all, because I did not report issues while I was still in the service; but hearing loss issues often take time to become apparent, and it's not like we were being tested for it. It was not uncommon for someone to forget their earplugs (the second layer of hearing protection we had, under our headset), and be essentially forced to go out on the flightline anyway to handle some task around running engines. These events would be downplayed, but just one instance of that can cause irreversible hearing loss.

But that's just a specific issue relevant to my time in the service. The bottom line is that unless you report every little thing that _might_ cause an issue later, you're going to be up shit creek when you get out and find out that something you were exposed to during your service is causing an expensive medical issue that you have to deal with. There are all kinds of fucked up things people had to do while they were in, that they struggle to get compensation for now that the consequences are catching up, because the issues aren't straightforwardly attributable to some event during your service, like with combat injuries. Burn pits are the most notorious one - you have advocates like Jon Stewart fighting in Congress to pass laws to support vets who were forced to work those pits and are now battling all kinds of complications. If anyone should be an obvious beneficiary of the disability system, beyond those with combat injuries, it should be those folks - but they are largely left hanging in the wind. In the case of burn pits especially, the military _knew_ there were consequences to having soldiers work them, but chalked it up as the cost of war.

Frankly, from what I've observed, once vets are out of the service, our country largely gives them the middle finger when they struggle with mental or physical health issues. I say "them" here, because I'm not a combat veteran, and I'm lucky that I had it pretty easy during my tenure (2009-2015), but for many, that is very much not the case. Yeah, nobody in Congress is going to stand up in front of everyone and say "let's make disability benefits harder to obtain" - but I have no doubt they'd have few, if any, qualms about doing it behind closed doors.


No, you’re exactly right, they are just regular singly-linked lists of tuples, nothing more.


The only time `is_` is used is with functions permitted in guards. These are functions defined in Erlang; only a small handful exist, and you learn them very early on. With the advent of `defguard`, it is conventional to use `is_` with custom guards as well, but that's the intuition - guards vs general predicates.


Sure thing. My point is that, if I want to check whether an argument is a keyword list, I have to do extra mental work to guess whether the correct function to use is `is_keyword` or `keyword?`. There also doesn't seem to be a consistent rule I can apply to figure out whether it's one or the other. Conversely, I also get tripped up every time I want to add a guard to a function like, "is the thing I want to test written as a macro or not?".

I understand the reasoning for the distinction and its roots in Erlang, it's just not very elegant to work with.


If you’re having to do extra mental work to guess, it means you don’t have enough familiarity with the language, its type checks/guards, and the standard library. That’s not the language’s fault.

If you want to check the type of a thing, you always want the matching `is_<type>`. It can be used anywhere in your code, including guards. There’s no guesswork involved here. That is the consistent rule.

When you see a function with a `?`, look at the typespec and the function name—give it the argument(s) it expects and it will answer the question on the tin with a boolean. Again, there’s no guesswork. These functions can be used anywhere except guards—that is the consistent rule for boolean functions.

> Is the thing I want to test written as a macro or not?

I have never had to ask myself this question, and I struggle to parse it. Is the “thing” you want to test referring to the value or to the test you wish to perform on that value? Since you’re asking about macros, I’m assuming you mean the guard test. For that, just learn what guards are available[0] (you can also write your own :D ).

[0]: https://hexdocs.pm/elixir/1.15.4/Kernel.html#guards


> If you’re having to do extra mental work to guess, .... That’s not the language’s fault.

Depends. In this case there are definite rules in place which you can learn; in general, though, inconsistent naming definitely is the language's fault. On another note, Elixir is actually very good at consistent naming:

- the standard library is designed, not mashed together layer by layer like the js/php abominations (the Erlang one is not consistent, and it leaks through sometimes)

- the pipe operator, by its mere presence, enforces a consistent order of arguments, even in third-party code


It is not a tool’s fault that, upon providing the relevant material that describes its usage and helps a user learn how to use it, the user ignores that and then blames the tool for not working how they want it to work.


*Assuming naming rules are consistent. After spending 5 years on PHP, I never learned its function names and argument orders. Even daily stuff like `strpos` requires looking at the documentation, dozens of times per day.

