Written by alex_s168
Last modified: 11 August 2025 16:38 (Git 9c2913af)
Function calls have some overhead, which can sometimes be a big issue for other optimizations. Because of that, compiler backends (should) inline function calls. There are, however, many issues with just greedily inlining calls…
This is the most obvious approach: first inline all functions that have only one call site, and then inline calls where the called function does not have many instructions.
Example:
function f32 $square(f32 %x) {
@entry:
// this is stupid, but I couldn't come up with a better example
f32 %e = add %x, 0
f32 %out = add %e, %x
ret %out
}
function f32 $hypot(f32 %a, f32 %b) {
@entry:
f32 %as = call $square(%a)
f32 %bs = call $square(%b)
f32 %sum = add %as, %bs
f32 %o = sqrt %sum
ret %o
}
function f32 $tri_hypot({f32, f32} %x) {
@entry:
f32 %a = extract %x, 0
f32 %b = extract %x, 1
f32 %o = call $hypot(%a, %b) // this is a "tail call"
ret %o
}
// let's assume that $hypot is used someplace else in the code too
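The greedy rule above can be sketched in a few lines. This is a hypothetical illustration, not a real compiler's heuristic; the function name, instruction counts, and threshold are all made up:

```python
# Hypothetical sketch of a greedy, size-based inlining heuristic.
# Callees are summarized by instruction count; all numbers are invented.

def should_inline(callee_size, num_call_sites, size_threshold=6):
    """Greedy rule: always inline functions called exactly once,
    otherwise only inline callees that are small enough."""
    if num_call_sites == 1:
        return True
    return callee_size <= size_threshold

# $square has 3 instructions and 2 call sites -> small enough, inlined
print(should_inline(callee_size=3, num_call_sites=2))
# after inlining $square twice, $hypot has 7 instructions -> over threshold
print(should_inline(callee_size=7, num_call_sites=2))
```

Note how the decision for $hypot flips purely because an earlier inlining decision grew its body; this ordering sensitivity is exactly the problem the example demonstrates.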
If we inline the $square calls, then $hypot will have too many instructions to be inlined into $tri_hypot:
...
function f32 $hypot(f32 %a, f32 %b) {
@entry:
// more instructions than our inlining threshold:
f32 %ase = add %a, 0
f32 %as = add %ase, %a
f32 %bse = add %b, 0
f32 %bs = add %bse, %b
f32 %sum = add %as, %bs
f32 %o = sqrt %sum
ret %o
}
...
The second option is to inline the $hypot call into $tri_hypot. (There are also some other options.)
Now in this case, it seems obvious to prefer inlining $square into $hypot.
If we assume the target ABI only has one f32 register for passing arguments, then we would have to generate additional instructions for passing the second argument of $hypot, and then it might actually be more efficient to inline $hypot instead of $square.
This example is not realistic, but this issue actually occurs when compiling lots of code.
Another related issue is that having more arguments arranged in a fixed way will require lots of moving data around at the call site.
A solution to this is to make the heuristic depend not just on code size, but also on the number of arguments and outputs passed to the function.
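Such an argument-aware estimate could look roughly like this. All weights and names are invented for illustration; a real cost model would be calibrated against the target:

```python
# Hypothetical sketch: an inlining benefit estimate that accounts for
# argument/return traffic at the call site, not just callee size.

def inline_benefit(callee_size, num_args, num_rets,
                   regs_for_args=1, move_cost=1, call_cost=2):
    # Arguments that don't fit in argument registers need extra moves
    # (or stack traffic) at every call; inlining removes that traffic.
    spilled = max(0, num_args - regs_for_args)
    saved_per_call = call_cost + num_rets * move_cost + spilled * move_cost
    # Benefit of inlining one call site: savings at the call, penalized
    # by the code growth of duplicating the callee body (weight made up).
    return saved_per_call - callee_size * 0.1

# With a 1-register ABI, inlining the 2-argument $hypot scores higher
# than inlining the 1-argument $square, despite $hypot being bigger:
print(inline_benefit(callee_size=7, num_args=2, num_rets=1))
print(inline_benefit(callee_size=3, num_args=1, num_rets=1))
```

The key design choice is that call-site savings and code growth are measured in the same unit, so a big callee with an expensive calling sequence can still win.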
function f32 $myfunc(f32 %a, f32 %b) {
@entry:
f32 %sum = add %a, %b
f32 %sq = sqrt %sum
...
}
function $callsite(f32 %a, f32 %b) {
@entry:
f32 %as = add %a, %a
f32 %bs = add %b, %b
f32 %x = call $myfunc(%as, %bs)
...
}
If the target has an efficient hypot operation, then that operation will only be used if we inline $myfunc into $callsite.
This means that inlining now depends on… instruction selection??
This is not the only optimization prevented by not inlining the call. If $callsite were to be called in a loop, then not inlining would prevent vectorization.
A related optimization is “outlining”. It’s the opposite of inlining: it moves duplicated code into a function, to reduce code size and sometimes increase performance (because of instruction caching).
If we do inlining separately from outlining, we often get suboptimal code.
We can instead first inline all inlinable calls, and then perform more aggressive outlining.
We inline all function calls, except for:
There are many algorithms for doing this.
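One of the simplest (and far from optimal) candidate-finding approaches is to hash fixed-length windows of instructions and keep the sequences that occur more than once. A toy sketch, with an invented instruction representation; real outliners use suffix trees or suffix automata and must also check register and ABI constraints:

```python
from collections import defaultdict

# Toy outlining candidate search: find identical fixed-length windows
# of instructions across functions. Instructions are plain strings here.

def outline_candidates(functions, window=3):
    seen = defaultdict(list)
    for name, insns in functions.items():
        for i in range(len(insns) - window + 1):
            key = tuple(insns[i:i + window])
            seen[key].append((name, i))
    # only sequences occurring at least twice are worth outlining
    return {k: v for k, v in seen.items() if len(v) >= 2}

funcs = {
    "f": ["add", "add", "sqrt", "ret"],
    "g": ["mul", "add", "add", "sqrt", "ret"],
}
print(outline_candidates(funcs))
# the ("add", "add", "sqrt") window appears in both f and g
```

Each candidate then has to be scored: outlining only pays off when the saved duplicate bytes outweigh the added call/return overhead at every occurrence.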
The goal of this step is to both:
The goal is to reduce the size of outlinable sections, to make the code more optimal.
This should be ABI- and instruction-dependent, and have the goal of:
This is also dependent on the targeted code size.
This is obvious.
Inlining all function calls first will increase memory usage during compilation by A LOT.
I’m sure that there is a smarter way to implement this method, without actually performing the inlining…
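One such smarter way could be to compute post-inlining sizes analytically, with a bottom-up pass over the call graph, instead of materializing any inlined bodies. A sketch under the assumption that there is no recursion (recursive calls wouldn’t be fully inlined anyway); the sizes and call lists mirror the earlier example:

```python
from functools import lru_cache

# Hypothetical sketch: estimate each function's size *after* full
# inlining without generating code, via a memoized call-graph walk.

own_size = {"square": 3, "hypot": 3, "tri_hypot": 3}  # non-call instructions
calls = {"square": [],
         "hypot": ["square", "square"],
         "tri_hypot": ["hypot"]}

@lru_cache(maxsize=None)
def inlined_size(fn):
    # each call site is replaced by the callee's fully inlined body
    return own_size[fn] + sum(inlined_size(c) for c in calls[fn])

print(inlined_size("tri_hypot"))  # 3 + (3 + 3 + 3) = 12
```

This runs in time linear in the number of call edges and needs no IR duplication, so the same idea could drive the outlining-aware heuristics without the memory blowup.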
Function inlining is much more complex than one might think.
PS: No idea how to implement this…
Subscribe to the Atom feed to get notified about future compiler-related articles.