Hacker News | abrudz's comments

Yes, it can also be a fork where A is an array while B and C are functions, or a tacit atop where either B is a monadic operator and A is its array or function operand, or A is a function and C is a monadic operator with B as its array or function operand. Finally, it can be a single derived function where B and C are monadic operators while A is B's array or function operand.

Do APL programmers think this is a good thing? It sounds a lot like how I feel about currying in languages that have it (meaning it's terrible because code can't be reasoned about locally, only with a ton of surrounding context, the entire program in the worst case).

It gets me thinking about the “high context / low context” distinction in natural languages. High-context languages are ones where the meaning of a symbol depends on the context in which it’s embedded.

It’s a continuum, so English is typically considered low-context, but it does have some examples. “Free as in freedom versus free as in beer” is one that immediately comes to mind.

A high-context language would be one like Chinese where, for example, the character 过 can be a grammatical marker for experiential aspect; a preposition equivalent to “over,” “across,” or “through,” depending on context; a verb with more English equivalents than I care to try and enumerate; an affix similar to “super-”; etc.

When I was first starting to learn Chinese it seemed like this would be hopelessly confusing. But it turns out that human brains are incredibly well adapted to this sort of disambiguation task. So now that I’ve got some time using the language behind me it’s so automatic that I’m not really even aware of it anymore, except to sit here racking my brain for examples like this for the purpose of relating an anecdote.

I would bet that it’s a similar story for APL: initially seems weird if you aren’t used to it, but not actually a problem in practice.


It makes parsing tricky. But for the programmer it’s rarely an issue, as typically definitions are physically close. Some variants like BQN avoid this ambiguity by imposing a naming scheme (function names upper case, array names lower case, or similar).

I am not good enough with APL to be certain, but I think you can generally avoid most of these sorts of ambiguities, and the terseness of APL helps a great deal because the required context is never far away; you generally don't even have to scroll. I have been following this thread to see what the more experienced have to say, and decided to force the issue.

Huh? Currying doesn't require any nonlocal reasoning. It's just the convention of preferring functions of type a -> (b -> c) to functions of type (a, b) -> c. (Most programming languages use the latter.)

Of course it requires non-local reasoning. You either get a function back or a value back depending on whether you've passed all the arguments. With normal function calling in C-family languages, you know that a function body is called when you write `foo(1, 2, 3)`, or else you get a compilation error or something. In a currying language you just get a new function back.

Functions are just a different kind of value. Needing to know the type of the values you're using when you use them isn't "nonlocal reasoning".

And it's not like curried function application involves type-driven parsing or anything. (f x y) is just parsed and compiled as two function calls ((f x) y), regardless of the type of anything involved, just as (x * y * z) is parsed as ((x * y) * z) in mainstream languages. (Except for C, because C actually does have type-driven parsing for the asterisk.)

Another way to look at it: languages like Haskell only have functions with one argument, and function application is just written "f x" instead of "f(x)". Everything follows from there. Not a huge difference.
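
To make that concrete, here is a minimal Haskell sketch (add3 and addTen are names invented purely for illustration): supplying fewer arguments than the "full" count is not an error, it just yields another function.

  -- Illustrative sketch only; names are made up for this example.
  add3 :: Int -> Int -> Int -> Int      -- really Int -> (Int -> (Int -> Int))
  add3 x y z = x + y + z

  addTen :: Int -> Int -> Int
  addTen = add3 10                      -- partial application: a new function

  main :: IO ()
  main = do
    print (add3 1 2 3)                  -- parsed as ((add3 1) 2) 3; prints 6
    print (addTen 4 5)                  -- prints 19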


It arguably depends on the syntax.

In an ML-like syntax where there aren’t any delimiters to surround function arguments, I agree it can get a little ambiguous because you need to know the full function signature to tell whether an application is partial.

But there are also languages like F# that tame this a bit with things like the forward application operator |> that, in my opinion, largely solve the readability problem.

And there are languages like Clojure that don’t curry functions by default and instead provide a partial application syntax that makes what’s happening a bit more obvious.
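
For what it's worth, Haskell offers a similar taming trick: Data.Function exports (&), a reverse-application operator that reads much like F#'s |>, and partial application can always be spelled out explicitly, roughly what Clojure's (partial + 10) makes visible. A small, hedged sketch (the pipeline and the addTen helper are invented for the example):

  import Data.Function ((&))  -- (&) :: a -> (a -> b) -> b, i.e. x & f = f x

  main :: IO ()
  main = do
    -- pipeline style: data flows left to right, much like F#'s |>
    print ([1 .. 10] & filter even & map (* 2) & sum)  -- prints 60
    -- explicit partial application, akin to Clojure's (partial + 10)
    let addTen = (+) 10
    print (map addTen [1, 2, 3])                       -- prints [11,12,13]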


I'd be happy to give you a live personalised intro at https://apl.chat, or you can head over to https://challenge.dyalog.com/ for an automated guided introduction (with a chance of winning a prize).


https://apl.quest might be of interest.


Also (mind the year this is from!):

— Can you teach a computer to write poetry?

— If you can teach it--yes, there's nothing easier. One of the things you could do, for example, is simply give it a collection of poems or prose or whatever you have, and then provide a program which selects pieces from these, either individual words, individual phrases, individual passages, and so on, and merges them together according to some criterion, which you would then write into the program, and also with a certain element of chance. Usually, you know, you'd say, "Well, you want to pick this sometimes, that sometimes." Yes, you can write it, but you raise the question, what would be the point?


Early computer folks were visionaries; they produced plenty of work and ideas that we are still slowly catching up to.

Imagine being able to do this on a random tablet (from ),

https://www.youtube.com/watch?v=2Cq8S3jzJiQ

The best we got is something like this,

https://www.youtube.com/watch?v=ifYuvgXZ108


What would be the point, indeed...


Stonks



Think "normal" call syntax like

  foo(bar(baz(42)))
and then remove the superfluous parens:

  foo bar baz 42
The expression is evaluated from right to left.

Now, let's make two of the functions into object members:

  A.foo(bar(B.baz(42)))
Remove the parens, extracting the methods from their objects, instead feeding each object as a left argument to its former member function:

  A foo bar B baz 42
This is normal APL-style call syntax; right-to-left if you want.


Oh, now I see it; it sort of reminds me of lambda calculus.



APL brings another couple of ternaries (besides those mentioned with regard to J):

  x f[k] y
This one is universal among traditional APLs (though some modern APLs, including J, remove it). However, it is traditionally only used for a concept of application along an axis of APL's multidimensional arrays, and only for a strictly limited set of built-in "operators". That said, GNU APL and NARS2000 both allow it on user-defined functions (which have infix syntax, just like "operators").

  x n.f y
This is infix application of the function f from the namespace n, so n must have a namespace value; it is still a proper argument, though, because n isn't just an identifier but can also be an expression:

  x (expression).f y
In fact, the expression can be an entire (multidimensional!) array of namespaces:

  x (namespace1 namespace2).f y
is equivalent to

  (x[1] namespace1.f y[1])(x[2] namespace2.f y[2])
Furthermore, f can be a built-in "operator" which appears to exist in every namespace, but which also takes the current namespace into consideration. For example, x⍳y is the position of y in x, and if io0 is a namespace wherein indexing is 0-based and io1 is a namespace wherein indexing is 1-based, we can write:

  'AX' 'BX' (io0 io1).⍳ 'X'
and get the result 1 2 because X is at position 1 in AX with 0-based indexing but at position 2 of BX with 1-based indexing. (I'm not saying you should write code like this, but you could.)


Correct. Now try parsing `x (f1 f2 f3 f4 f5 f6) y`!


It looks like that should be

  x f1 ((f2 y) f3 ((f4 y) f5 (f6 y)))
though the docs include the phrase "which is part of a hook" and I don't know what that means.

