
Saying "it allows for very concise partial function application" does nothing but repeat the definition. It does not, in particular, offer any reason to want that. What is so special about the first argument, that I want to fix it? Why not the third? Why is what I do to fix the third not just as good for the first?

Pattern matching is a good example of a language feature included because they could not figure out how to provide features expressive enough to be composed to implement the feature in a library.



In a language like Haskell, pattern matching was explicitly chosen to be a primitive operation. It's not that there's no way to put it in a library; it's that it was chosen to be one of the small set of ideas everything else is described in terms of. Along with allocation and function application, you've got the entirety of Haskell's evaluation model. (Note: not execution model. That needs a bit more.) Having such a small evaluation model probably should be taken as evidence the primitives were chosen well.


> Note: not execution model. That needs a bit more.

Actually... not really? You need the foreign function interface to have anything useful to execute, but (unless you're talking about something else?) the execution model is basically just a State monad carrying a unique magic token, built on top of the same evaluation model as everything else.


The execution model needs something that actually drives execution. There needs to be something that explains how IO actions are actually run. The state approach kind of works until you need to explain multiple threads; then it falls over.

Edward Kmett did some work to describe IO as an external interpreter working through a free monad. That approach provides very neat semantics that include multiple threads and FFI easily.

But IO needs something to make it go, and that something needs capabilities that aren't necessary for the evaluation model.
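The "external interpreter working through a free monad" idea can be sketched in a few lines of JavaScript (all names here are illustrative, not any real library's API): programs are inert data structures, and a separate interpreter is the only thing that actually performs effects, which is exactly the capability the evaluation model alone doesn't have.

```javascript
// Toy sketch of "IO as a free monad": the program is pure data,
// and a separate interpreter actually performs the effects.
// Every name here is made up for illustration.

// Instruction constructors; each carries a continuation for the result.
const putLine = (s, k) => ({ tag: "PutLine", s, k });
const getLine = k => ({ tag: "GetLine", k });
const done = x => ({ tag: "Done", x });

// A pure "program": building it causes no effects at all.
const program = putLine("hello", () => putLine("world", () => done(42)));

// The interpreter is the only place effects happen; swapping it out
// (e.g. for a threaded scheduler) changes the execution model without
// touching the program or the evaluation model.
function run(prog, out) {
  while (true) {
    switch (prog.tag) {
      case "PutLine": out.push(prog.s); prog = prog.k(); break;
      case "GetLine": prog = prog.k("stub input"); break;
      case "Done": return prog.x;
    }
  }
}

const output = [];
const result = run(program, output);
// output is ["hello", "world"], result is 42
```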


> The execution model needs something that actually drives execution.

Er, yes: specifically, execution is driven by trying to pull the final RealWorld token out of an expression along the lines of `runState (RealWorld#) (main)`, which requires the evaluation of (the internal `RealWorld -> (RealWorld, a)` functions of) IO actions provided by FFI functions (hence why you need FFI to have anything useful (or more accurately, side-effectful) to execute).

> until you need to explain multiple threads

I don't. It's a terrible idea that's effectively the apotheosis of premature (and ill-targeted) optimization as the root of all evil.


I'm not trying to dodge your question, but the answer to your question is that until you work with it for a while you're not going to understand it. Any blog-sized, bite-sized snippet isn't impressive. You have to work with it for a while.

I speak many computer languages, and one of the ways I measure them is, "what do I miss from X when using Y?" The answers will often surprise you; the thing you'd swear up and down you'd miss from X you may never think of again if you leave the language, and some feature you hardly consider while you're in the thick of programming in X may turn out to be the thing you miss in every other language for the rest of your life. For me, one of the things I deeply miss from Haskell when programming elsewhere is the fluidity of the use of partial function application through the currying syntax. I see so many people trying to force it out of Haskell into some other language but there just isn't any comparison to the fluidity of

    map someFunc . filter other . map (makeMap x y) $ userList
Currying enables that syntax. If you don't see how, I'm not surprised. Sit down and try to write a syntax extension for an Algol-descended language that makes it work that well. Be honest about it. Run some non-trivial expressions through it. What I show above you should consider the minimum level of expression to play with, not the maximum. If you pick something like Python to work in, be sure to consider the interaction with all the types of arguments, like positional, keyword, default, etc. It can absolutely be done, but short of re-importing currying through the back door I guarantee the result is a lot less fluid.
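To make the comparison concrete, here is a rough JavaScript attempt at that pipeline (someFunc, other, makeMap, and userList are stand-ins I invented for the sketch): even with a compose helper doing the job of Haskell's (.), each stage needs an explicit lambda because map and filter aren't curried.

```javascript
// Hypothetical JavaScript rendering of the Haskell pipeline.
// compose chains right-to-left, like Haskell's (.) operator.
const compose = (...fns) => x => fns.reduceRight((acc, f) => f(acc), x);

// Stand-ins for someFunc, other, makeMap — purely illustrative.
const someFunc = u => u.name.toUpperCase();
const other = u => u.active;
const makeMap = (x, y) => u => ({ ...u, [x]: y });

// Without currying, every stage needs an explicit wrapper lambda:
const pipeline = compose(
  list => list.map(someFunc),     // map someFunc
  list => list.filter(other),     // filter other
  list => list.map(makeMap("tag", "vip"))  // map (makeMap x y)
);

const userList = [
  { name: "ada", active: true },
  { name: "bob", active: false },
];
// pipeline(userList) → ["ADA"]
```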

The question about "what is special about the first argument" is backwards. The utility of the first argument is something you choose at writing time, not use time. The reason why map's first argument is the function to map on and the second is the list to map over is nothing more and nothing less than most of the time, users of map use it as I show above. There's no deep mathematical or group theory reason. If you don't like it there are simple functions to change the orders around, and the goal of picking function argument orders is just to minimize the amount of such functions in the code. Nothing more, nothing less.
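In JavaScript terms, those "simple functions to change the orders around" are just small combinators; a hand-rolled flip for curried two-argument functions (sketch, not a standard library function) looks like this:

```javascript
// flip: swap the first two arguments of a curried two-argument function.
const flip = f => y => x => f(x)(y);

// A curried map whose function comes first, as in Haskell:
const map = f => xs => xs.map(f);
// flip gives us the other argument order for free:
const mapOver = flip(map); // now the list comes first

const double = n => n * 2;
// map(double)([1, 2, 3])    → [2, 4, 6]
// mapOver([1, 2, 3])(double) → [2, 4, 6]
```

The argument-order choice only determines which direction needs the combinator; the goal is just to make the common case combinator-free.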


OK, thank you. The key seems to be that argument lists for functions in a language with easy currying are carefully designed so as to make currying maximally useful.


That's an example of why Haskell code is often hard to read, and why I'm glad this isn't available in other languages. Why not write out the callback given to map using a let block? Then you can give meaningful names to intermediate values so it's easier to see how the pipeline works.

(Though, functional programmers would probably pass up the opportunity and use one-letter variable names.)


"It depends". My long pipelines were still usually nice to read because I had context-relevant names and I didn't tend to name them with single letters. You could still read off from them what they were doing. However honesty compels me to admit that by Haskell standards I tended to write long variable names. Not everywhere; there's a lot of "f" in Haskell for "functions that I only know about them that they are funcitons" for the same reason one you aren't winning in C by insisting that every for loop has to use a descriptive variable rather than "i". But I tended towards the longer.

In real code I probably would have named it, but example code is always silly.


Is this what you mean?

    let f = makeMap x y
    in map someFunc . filter other . map f $ userList
I suspect not as it certainly doesn't seem to improve the situation. Perhaps you could elaborate?

Edit to add: I was focused on the "write out the callback given to map using a let block" part and not the meaningful name part, but that's perhaps what's confusing about this to me, as (makeMap x y) seems like the most clear name possible here, what else could you do? This?

    let mapOfXy = makeMap x y
    in map someFunc . filter other . map mapOfXy $ userList
It only obscures things.


The feature is not called currying but partial application; it is very nice and intuitive.


Couldn't I "curry" a function

    f(x,y)
to "partially apply" it on x=3 with just:

    function(y) { return f(3,y); }
?

And then I've "partially applied" f. Is this what currying is?

The syntax I posted is agnostic about which argument you're partially applying; isn't that superior to only being able to provide the first argument?


Currying is turning a function with arity n into a sequence of unary functions.

  f(x,y) = ... // arity 2
  f = \x -> \y -> ... // sequence of two unary functions
  f x y = ... // more compact representation of the above
Regarding partial application, your JS (?) example is basically what happens under the hood with Haskell, but without the ceremony:

  f x y = ...
  fByThree = f 3
  fByThree y = f 3 y
Those last two are equivalent, but you don't have to include the y parameter explicitly, because f 3 returns a function that expects another parameter (like the function you wrote out explicitly).

And the Haskell version is more general, since it doesn't require a unique function to be written for each possible partial application. Of course, you can do this in languages with closures:

  function fByX(x) {
    return y => f(x,y);
  }
  fByThree = fByX(3);
But in Haskell that extra function isn't needed, it's just already there and called f. Regarding your last statement, there are also combinators in Haskell that allow you to do things like this:

  fXByThree = flip f 3
  -- equivalent to:
  fXByThree x = f x 3
  // JS
  function fXByThree(x) { return f(x,3); }
  // or
  function fXByFixedY(y) { return x => f(x,y); }
  fXByThree = fXByFixedY(3);
So I'm not sure it's strictly superior, it is more explicit though.
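The JS versions above are all written by hand per function; a generic curry helper (a sketch I'm adding for illustration, not a built-in) can recover Haskell-style partial application for any function of fixed arity:

```javascript
// Generic curry: collects arguments one at a time until the
// function's declared arity is reached, then calls it.
const curry = f => {
  const collect = args =>
    args.length >= f.length ? f(...args) : x => collect([...args, x]);
  return collect([]);
};

// Subtraction makes the argument order visible:
const f = (x, y) => x - y;
const g = curry(f);
// g(10)(3)  → 7   (f(10, 3))

const fByThree = g(3);
// fByThree(10) → -7  (f(3, 10), i.e. 3 - 10)
```

Note that this relies on `f.length`, which is why default and rest parameters (which change a function's reported arity) complicate auto-currying in JavaScript.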


Thanks!


This is why I said to be sure to consider all the things you can do with function arguments. You can't in general assume that f(3) for a two-argument function can be read by the compiler as a curry, because the second argument may have a default. You also have to watch out in dynamic languages, because the resulting function (instead of a value) may propagate a long way before being caught, making refactoring a challenge. (In Haskell it'll always blow up because the type system is strong, although you may not get the nicest error.) Imagine you have a whole bunch of code using this approach, and then you add a default value for the second argument. Or imagine you're applying it to a whole bunch of code that wasn't written with the first parameter chosen as the one you'd want to partially apply.

The end result is you need a lot more noise in the partial call to indicate what it is you are doing, no matter how you slice it, and it interacts poorly with default parameters (which almost every language has nowadays) anyhow.
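The default-parameter hazard is easy to demonstrate (pad and padTo are hypothetical names for the sketch): once an argument has a default, a call supplying fewer arguments is already complete, so a would-be partial application has to be spelled out explicitly.

```javascript
// With a default on width, pad("hi") is a complete call, not a
// partial application waiting for width.
function pad(s, width = 8) {
  return s.padStart(width);
}

// If the language auto-curried, pad("hi") would be ambiguous:
// the padded string, or a function still expecting width?
// JavaScript resolves it by applying the default:
const r = pad("hi"); // "      hi" (length 8)

// So deferring the call needs an explicit, noisier wrapper:
const padTo = width => s => pad(s, width);
const pad4 = padTo(4);
// pad4("hi") → "  hi"
```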


Currying support is orthogonal to how function arguments work. Some languages (e.g. OCaml according to this question[1]) combine named parameters with currying, allowing partial application with any of the function's arguments.

[1] https://stackoverflow.com/questions/3015019/is-there-a-progr...
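JavaScript has no labeled arguments, but the closest analogue to OCaml's trick (illustrative sketch only; partial and describe are invented names) is partial application over an options object, which likewise lets you fix any named argument in any order:

```javascript
// Simulating labeled-argument partial application with an options
// object: fix any subset of named arguments, in any order.
const partial = (f, fixed) => rest => f({ ...fixed, ...rest });

const describe = ({ name, role, team }) => `${name} (${role}, ${team})`;

// Fix the "team" argument, leaving the others open:
const onCoreTeam = partial(describe, { team: "core" });
// onCoreTeam({ name: "ada", role: "dev" }) → "ada (dev, core)"
```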



