teeML: Trusted Execution Environment for Machine Learning Inference. Its purpose is to enable querying any LLM API, for both open- and closed-source models, in a low-latency, low-cost, and verifiable manner.
A high-level overview is described at docs.galadriel.com
Currently supports calling:
- OpenAI GPT models
- generating images with DALL-E
This project is divided into 3 parts:
- enclave - this is where the enclave is built and run
- admin - this is where the admin can interact with the enclave and verify the attestation doc
- verify - minimal version of the admin to only validate the enclave's attestation doc
If you came here just to learn how to verify the enclave's attestation doc, see this README
- set up an AWS Nitro Enclaves-enabled VM
- it is recommended to go through the tutorials first before running...
- the enclave ships with `libnsm.so`, which Python calls through a C binding; libnsm is a Rust shared object with a Python wrapper around it
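The Python-to-Rust bridge described above follows the standard `ctypes` pattern for calling into a shared object. The actual libnsm exports are not shown in this README, so the sketch below demonstrates the pattern against the standard C math library purely so it runs anywhere; a libnsm wrapper would load `libnsm.so` and declare its NSM request/response functions the same way.

```python
import ctypes
import ctypes.util

# Locate and load a shared object. A libnsm wrapper would load
# "libnsm.so" here; libm is used only so this sketch is runnable.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature before calling, exactly as a Python wrapper
# around a Rust cdylib must do for each exported function.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # -> 3.0
```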
Set up the admin `.env` file that will be sent to the enclave once it starts:

```shell
cd admin
cp .env.template .env  # update the .env file with the correct values
```
Run the enclave:
```shell
cd enclave
./run_proxies.sh
./run_enclave.sh
```
Enclave data example:
```json
{
  "Measurements": {
    "HashAlgorithm": "Sha384 { ... }",
    "PCR0": "e11704780b078425d45dac5f72b523264406531ff6f4611aba908c320a20b5f2ec81404d21f6f0aef415adf2590d4129",
    "PCR1": "52b919754e1643f4027eeee8ec39cc4a2cb931723de0c93ce5cc8d407467dc4302e86490c01c0d755acfe10dbf657546",
    "PCR2": "b67f9d7d0a69f6eaf2cba87ffbe983eb4491dbb4ac4aef07528cd75327bfd8b5d5122c4f73c61c3836e57363306141cc"
  }
}
```
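The `Measurements` block is what a verifier pins against: the PCRs reported in the enclave's attestation document must equal known-good values. A minimal sketch of that comparison step is below, using the example PCRs above; note that real verification additionally checks the attestation document's COSE signature and AWS certificate chain, which is omitted here.

```python
# Known-good PCR values (taken from the example enclave data above).
EXPECTED_PCRS = {
    "PCR0": "e11704780b078425d45dac5f72b523264406531ff6f4611aba908c320a20b5f2ec81404d21f6f0aef415adf2590d4129",
    "PCR1": "52b919754e1643f4027eeee8ec39cc4a2cb931723de0c93ce5cc8d407467dc4302e86490c01c0d755acfe10dbf657546",
    "PCR2": "b67f9d7d0a69f6eaf2cba87ffbe983eb4491dbb4ac4aef07528cd75327bfd8b5d5122c4f73c61c3836e57363306141cc",
}

def pcrs_match(reported: dict, expected: dict) -> bool:
    """True iff every expected PCR is present and equal (case-insensitive hex)."""
    return all(
        reported.get(name, "").lower() == value.lower()
        for name, value in expected.items()
    )

# Pretend these were parsed out of a freshly fetched attestation document.
reported = dict(EXPECTED_PCRS)
print(pcrs_match(reported, EXPECTED_PCRS))  # -> True

reported["PCR0"] = "0" * 96  # a tampered enclave image changes PCR0
print(pcrs_match(reported, EXPECTED_PCRS))  # -> False
```

Pinning all three PCRs ties the attestation to a specific enclave image (PCR0), kernel/boot ramdisk (PCR1), and application (PCR2), so any change to what is running inside the enclave fails the check.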