This application demonstrates the use of a simple TVM model in the Intel SGX trusted computing environment.
- The TVM premade Docker image
or
- A GNU/Linux environment
- TVM compiled with LLVM and SGX; and the `tvm` Python module
- The Linux SGX SDK (link to pre-built libraries)
- Rust
- The rust-sgx-sdk
- xargo
Check out `/tvm/install/ubuntu_install_sgx.sh` for the commands to get these dependencies.
If using Docker, start by running
```sh
git clone --recursive https://github.com/dmlc/tvm.git
docker run --rm -it -v $(pwd)/tvm:/mnt tvmai/ci-cpu /bin/bash
```
then, in the container
```sh
cd /mnt
mkdir build && cd build
cmake .. -DUSE_LLVM=ON -DUSE_SGX=/opt/sgxsdk -DRUST_SGX_SDK=/opt/rust-sgx-sdk
make -j4
cd ..
pip install -e python -e topi/python -e nnvm/python
cd apps/sgx
```
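As an optional sanity check, the editable installs above should now be importable from Python (assuming the `pip install -e` step succeeded):

```python
# Optional check: the tvm, topi, and nnvm packages installed above should
# import cleanly before you build the SGX example.
import tvm
import topi
import nnvm

print(tvm.__version__)
```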
Once TVM is built and installed, just run `./run_example.sh`.
If everything goes well, you should see a lot of build messages and, below them,
the text `It works!`.
First of all, it helps to think of an SGX enclave as a library that can be called to perform trusted computation. In this library, one can use other libraries like TVM.
Building this example performs the following steps:
- Creates a simple TVM module that computes `x + 1` and saves it as a system library (a sketch of this step appears after this list).
- Builds a TVM runtime that links the module and allows running it using the TVM Python runtime.
- Packages the bundle into an SGX enclave.
- Runs the enclave using the usual TVM Python `module` API.
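To make the first step concrete, here is a minimal sketch of building such a module with the TVM Python API of this era (`tvm.var`, `tvm.placeholder`, `tvm.build`); the function name `addone` and the output filename are illustrative, not necessarily the names used by this example's actual build script:

```python
import tvm

# Declare the computation B[i] = A[i] + 1 over a vector of symbolic length n.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

# Compiling with `--system-lib` makes the module register itself in the
# binary's system library, so the generated object file can be linked
# directly into the enclave instead of being loaded at runtime.
module = tvm.build(s, [A, B], "llvm --system-lib", name="addone")
module.save("addone_sys.o")
```

The enclave's TVM runtime then links this object file and exposes the function through the usual `module` API, which is what the last step uses to call into the enclave from Python.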
For more information on building, please refer to the `Makefile`.
For more information on the TVM module, please refer to `../howto_deploy`.
For more information on SGX enclaves, please refer to the SGX Enclave Demo.