A wide variety of tools can be applied to LLVM IR. For example, as demonstrated in Chapter 1, LLVM Design and Use, the IR can be dumped to bitcode or to assembly, and the optimization tool opt can be run on it. The IR acts as the common platform, an abstract layer shared by all of these tools.
JIT support can also be added so that top-level expressions are evaluated immediately as they are typed in. For example, as soon as 1 + 2; is entered, the code is evaluated and the value 3 is printed.
Do the following steps:

1. Declare a global pointer to the execution engine in the toy.cpp file:

   static ExecutionEngine *TheExecutionEngine;

2. In the toy.cpp file's main() function, write the code for the JIT:

   int main() {
     …
     init_precedence();
     TheExecutionEngine = EngineBuilder(TheModule).create();
     …
   }

3. Modify the top-level expression handler in the toy.cpp file:

   static void HandleTopExpression() {
     if (FunctionDefAST *F = expression_parser()) {
       if (Function *LF = F->Codegen()) {
         LF->dump();
         void *FPtr = TheExecutionEngine->getPointerToFunction(LF);
         int (*Int)() = (int (*)())(intptr_t)FPtr;
         printf("Evaluated to %d\n", Int());
       }
     } else {
       next_token();
     }
   }
Do the following steps:

1. Compile the toy.cpp program:

   $ g++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core mcjit native` -O3 -o toy

2. Open an example file:

   $ vi example

3. Write the following expression in the example file:

   …
   4+5;

4. Run the toy program with the example file:

   $ ./toy example

   The output will be:

   define i32 @0() {
   entry:
     ret i32 9
   }
The LLVM JIT compiler matches the native platform ABI, casts the result pointer into a function pointer of that type, and calls it directly. There is no difference between JIT-compiled code and native machine code that is statically linked to the application.