Closure conversion and separate compilation of higher-order function calls
Is there a standard way of dealing with the interaction between separate compilation and different kinds of closure conversion when compiling higher-order function calls?
I know of three function-like constructs that are distinctly compiled in most programming languages: closures, (top-level) functions, and C++-style function objects. Syntactically they are called the same way, but a compiler would optimally generate distinctly-shaped call sites:
Syntax:   | clo(args)                | func(args)  | obj(args)
----------+--------------------------+-------------+----------------------
Codegen:  | clo.fnc(&clo.env, args)  | func(args)  | cls_call(&obj, args)
Here clo.fnc is called through a function pointer, func and cls_call are "top level" functions, and &clo.env and &obj are the "extra" parameters that do not appear in the source-level type.
(In C++, cls_call would be T::operator() for obj's class T. C++ also allows virtual functors, but that's essentially the closure case with an extra indirection.)
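To make the three shapes concrete, here is a minimal C sketch of what the lowered calls might look like; the struct layout and the names closure, fnc, env, greater_than, and cls_call are illustrative assumptions, not any particular compiler's ABI.

#include <stdbool.h>

/* Hypothetical lowered representation of a closure: a code pointer
   plus an environment pointer. The layout is assumed for illustration. */
struct closure {
    bool (*fnc)(void *env, int arg);   /* code pointer takes an extra env param */
    void *env;                         /* captured variables live here */
};

/* A C++-style function object: carries state, but has a statically known call target. */
struct greater_than {
    int threshold;
};
static bool cls_call(struct greater_than *obj, int arg) {
    return arg > obj->threshold;
}

/* A plain top-level function. */
static bool gt3(int arg) { return arg > 3; }

static bool demo(struct closure *clo, struct greater_than *obj, int x) {
    bool a = clo->fnc(clo->env, x);  /* closure call: indirect, extra env argument */
    bool b = gt3(x);                 /* top-level call: direct, source-shaped */
    bool c = cls_call(obj, x);       /* functor call: direct, extra self argument */
    return a && b && c;
}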
At this point, calls to map (x => x > 3) lst and map (x => x > y) lst should invoke different map functions, because the first is a simple function pointer after hoisting, and the second is a closure.
I can think of four ways of dealing with this issue:
1. The C++ (98) approach, which forces the callee to either pick a call-site shape (via the formal parameter type: virtual functor, function pointer, or non-virtual functor) or drop separate compilation by using a template, effectively specifying solution #2 below.
2. Overloading: the compiler could do multiple instantiation of map, and of all other higher-order functions, with appropriate name-mangling. In effect, there is a separate internal function type per call-site shape, and overload resolution picks the right one.
3. Mandate a globally uniform call-site shape. This means that all top-level functions take an explicit env argument, even if they don't need it, and that "extra" closures must be introduced to wrap non-closure arguments (see the sketch after this list).
4. Retain the "natural" signature for top-level functions, but mandate that all handling of higher-order function params be done through closures. The "extra" closures for already-closed functions call a wrapper trampoline function to discard the unused env parameter. This seems more elegant than option 3, but harder to implement efficiently: either the compiler generates a multitude of calling-convention-independent wrappers, or it uses a small number of calling-convention-sensitive thunks...
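As a rough illustration of options 3 and 4 (all names and layouts are assumptions): under option 3 even an env-free top-level function is forced into the closure shape with a dummy env, while under option 4 it keeps its natural signature and gets a per-function trampoline wrapper only when it must be passed as a closure.

/* Option 3: a globally uniform call-site shape -- everything is a closure,
   so even env-free top-level functions take a dummy env parameter. */
struct closure { int (*fnc)(void *env, int x); void *env; };

static int add1_uniform(void *env, int x) { (void)env; return x + 1; }
static struct closure add1_clo = { add1_uniform, 0 };

/* Option 4: the top-level function keeps its natural signature ... */
static int add1(int x) { return x + 1; }

/* ... and a dedicated wrapper trampoline (one of the "multitude of
   calling-convention-independent wrappers") discards the unused env
   when add1 has to be passed where a closure is expected. */
static int add1_tramp(void *env, int x) { (void)env; return add1(x); }
static struct closure add1_wrapped = { add1_tramp, 0 };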
Having an optimized closure-conversion/lambda lifting hybrid scheme, with a per-function choice of whether to stick a given closure argument in the env or the parameter list, seems like it would make the issue more acute.
Anyways, questions:
- Does this issue have an explicit name in the literature?
- Are there other approaches besides the four above?
- Are there well-known tradeoffs between approaches?
This is a pretty deep question with a lot of ramifications, and I don't want to write a scholarly article here. I will just scratch the surface and will point you to more information elsewhere. I am basing my response on personal experience with the Glorious Glasgow Haskell Compiler and with Standard ML of New Jersey, as well as scholarly papers written about those systems.
The key distinction made in an ambitious compiler is the distinction between known calls and unknown calls. For languages with higher-order functions, a secondary but still important distinction is whether the call is fully saturated (which we can decide only at a known call site).
A known call means a call site where the compiler knows exactly what function is being called and how many parameters it expects.
An unknown call means the compiler can't figure out what function might be called.
A known call is fully saturated if the function being called is getting all the parameters it expects, in which case the call goes straight to the function's code. If the function is getting fewer arguments than it expects, the function is partially applied, and the call results only in the allocation of a closure.
For example, if I write the Haskell functions
mapints :: (Integer -> a) -> [a]
mapints f = map f [1..]
then the call to map is known and fully saturated.
If I write
inclist :: [Integer] -> [Integer]
inclist = map (1+)
then the call to map is known and partially applied.
Finally, if I write
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g x = f (g x)
then the calls to f and g are both unknown.
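Ignoring laziness and boxing, here is a schematic C-level picture of how these three might be lowered. The uniform closure representation, map_code, and alloc_pap are assumptions made for the sketch, not GHC's or SML/NJ's actual scheme, and the runtime primitives are only declared, not defined.

/* Assumed uniform closure representation and runtime primitives. */
struct closure { void *(*code)(void *env, void *arg); void *env; };

void *map_code(void *f, void *list);                          /* compiled body of map */
struct closure *alloc_pap(void *(*code)(void *, void *),      /* build a partial      */
                          void *arg);                         /* application closure  */

/* mapints f = map f [1..]   -- known and saturated: call map's code directly.
   ([1..] is passed in as `nats` to keep the sketch self-contained.) */
void *mapints(void *f, void *nats) {
    return map_code(f, nats);
}

/* inclist = map (1+)        -- known but partially applied: no call happens,
   the caller just allocates a closure remembering map and its one argument. */
struct closure *inclist(void *inc) {
    return alloc_pap(map_code, inc);
}

/* compose f g x = f (g x)   -- unknown calls: go indirect through each closure,
   passing the environment explicitly. */
void *compose(struct closure *f, struct closure *g, void *x) {
    void *gx = g->code(g->env, x);
    return f->code(f->env, gx);
}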
The main thing mature compilers do is optimize known calls. In your classification above this strategy falls mostly under #2.
If all call sites to a function are known, a good compiler will create a special-purpose calling convention just for that function, e.g., passing arguments in just the right registers to make things work out nicely.
If some but not all call sites of a function are known, the compiler may decide it is worthwhile to create a special-purpose calling convention for the known calls, which will either be inlined or will use a special name known only to the compiler. The function exported under the name in the source code will use a standard calling convention, and its implementation is typically a thin layer that makes an optimized tail call to the specialized version.
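A minimal sketch of that layering in C, with invented names (a real compiler would additionally pin arguments to particular registers, which portable C cannot express):

/* Specialized version with a compiler-private name, used at known call sites
   (and a natural inlining candidate). */
static int sum3_known(int a, int b, int c) {
    return a + b + c;
}

/* The function exported under its source-code name keeps the standard
   calling convention and is just a thin layer over the specialized one. */
int sum3(int a, int b, int c) {
    return sum3_known(a, b, c);   /* an optimized tail call in practice */
}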
If a known call is not fully saturated, the compiler just generates code to allocate the closure right there in the caller.
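For instance, a known call to a two-argument add that receives only one argument might lower to nothing but an allocation; the partial-application record below is an assumed layout, not a real runtime's.

#include <stdlib.h>

static int add(int x, int y) { return x + y; }   /* known two-argument function */

/* Assumed partial-application record for `add` applied to one argument. */
struct add_pap {
    int (*code)(struct add_pap *self, int y);
    int x;                                       /* the argument captured so far */
};

static int add_pap_enter(struct add_pap *self, int y) {
    return add(self->x, y);                      /* now saturated: run the real code */
}

/* Lowering of the unsaturated known call `add 1`: no call is emitted,
   the caller just allocates the closure right there. */
struct add_pap *add_1(void) {
    struct add_pap *p = malloc(sizeof *p);
    p->code = add_pap_enter;
    p->x = 1;
    return p;
}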
The representation of closures (or whether first-class functions are handled by some other technique such as lambda lifting or defunctionalization) is largely orthogonal to the handling of known vs unknown calls.
(It may be worth mentioning an alternative approach, used by MLton: it is a whole-program compiler; it gets to see all the source code; it reduces all functions to first order using a technique I've forgotten. There are still unknown calls because general control-flow analysis in higher-order languages is intractable.)
Regarding your final questions:
I think this issue is just one facet of the messy problem called "how to compile first-class functions". I've never heard a special name for just this issue.
Yes, there are other approaches. I've sketched one and mentioned another.
I'm not sure if there are any great, broad studies on tradeoffs, but the best one I know of, which I recommend very highly, is Making a Fast Curry: Push/Enter vs. Eval/Apply for Higher-Order Languages by Simon Marlow and Simon Peyton Jones. One of the many great things about this paper is that it explains why the type of a function does not tell you whether a call to that function is fully saturated.
To wrap up your numbered alternatives: number 1 is a nonstarter. Popular compilers use a hybrid strategy related to numbers 2 and 3. I've never heard of anything resembling number 4; the distinction between known and unknown calls seems more useful than distinguishing top-level functions from arguments of function type.